Nagios and Ubuntu 64-bit 14.04 LTS Setup & Configuration

1st – The Virtual Machine

First I created a virtual machine for use with VMware Fusion on OS-X. Once I got a nice clean Ubuntu 14.04 image set up, I installed SSH on it so I could manage it as if it were a headless (i.e. no monitor attached) machine (instructions).

In addition to installing openssh, those steps also cover build-essential, make, and gcc, along with instructions for installing VMware Tools. Don’t worry about the VMware Tools part; those instructions are cumbersome and in parts just wrong, so skip that. The virtual machine is up and running with ssh and a good C compiler at this point, so we’re all set.
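
For reference, a likely set of install commands for those packages (assuming the standard Ubuntu repositories; your exact steps may differ) looks like this.

sudo apt-get update
sudo apt-get install openssh-server build-essential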

2nd – The LAMP Stack

sudo apt-get install apache2

Once installed, the default page will be available on the server, so navigate over to 192.168.x.x and view the page to ensure it is up and running.


Next install MySQL and the php5-mysql package.

sudo apt-get install mysql-server php5-mysql

During this installation you will be prompted for the mysql root account password. It is advisable to set one.
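
The prompts described next come from MySQL’s post-install setup scripts. A likely pair of commands to kick them off (assuming the stock Ubuntu 14.04 packages) is the following.

sudo mysql_install_db
sudo mysql_secure_installation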

Then you will be asked to enter the password (the one you just set about 2 seconds ago) for the MySQL root account. Next, it will ask you if you want to change that password. Select ‘n’ so as not to create another password for the root account, since you’ve already created the password just a few seconds before.

For the rest of the questions, simply hit the enter key for each prompt to accept the default values. This will remove some sample users and databases, disable remote root logins, and reload the privileges so that MySQL immediately respects the changes we have made.

Next up is to install PHP. No grumbling, just install PHP.

sudo apt-get install php5 libapache2-mod-php5 php5-mcrypt

Next let’s open up dir.conf and edit a small section to change which files Apache will serve first upon request. Here’s what the edit should look like.

Open up the file to edit. (In vi, press ‘i’ to insert or edit. To save, hit escape and type ‘:w’; to exit vi after saving, hit escape and then ‘:q’. To force exit without saving, hit escape and then ‘:q!’.)

sudo vi /etc/apache2/mods-enabled/dir.conf

This is what the file will likely look like once opened.

<IfModule mod_dir.c>
    DirectoryIndex index.html index.cgi index.php index.xhtml index.htm
</IfModule>

Move the index.php file to the beginning of the DirectoryIndex list.

<IfModule mod_dir.c>
    DirectoryIndex index.php index.html index.cgi index.xhtml index.htm
</IfModule>

Now restart apache so the changes will take effect.

sudo service apache2 restart

Next let’s set up public key authentication. On your local box complete the following.
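
The key generation command isn’t shown here; a likely version (assuming RSA keys and the default file locations) is the following.

ssh-keygen -t rsa

Hit enter to accept the default location for the key files, then decide whether or not to set a passphrase.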


If you don’t enter a passphrase, you will be able to use the private key for authentication without entering one. If you do enter one, you’ll need both it and the private key to log in. Securing your keys with a passphrase is more secure, but either way the system is more secure than with basic password authentication. For this particular situation, I’m skipping the passphrase.

What is generated is id_rsa, the private key, and id_rsa.pub, the public key. They’re put in a directory called .ssh under the local user’s home directory.

At this point copy the public key to the remote server. On OS-X grab the easy to use ssh-copy-id script with this command.

brew install ssh-copy-id


curl -L | sh

Then use the script to copy the ssh key to the server.

ssh-copy-id adron@192.168.x.x

That should give you the ability to log into the machine without a password every time. Give it a try.

Ok, so now on to the meat of this entry, Nagios itself.

Nagios Installation

Create a user and group that will be used to run the Nagios process.

sudo useradd nagios
sudo groupadd nagcmd
sudo usermod -a -G nagcmd nagios

Install these other essentials.

sudo apt-get install libgd2-xpm-dev openssl libssl-dev xinetd apache2-utils unzip

Download the source and extract it, then change into the directory.

curl -L -O
tar xvf nagios-*.tar.gz
cd nagios-*

Next run the command to configure Nagios with the appropriate user and group.

./configure --with-nagios-group=nagios --with-command-group=nagcmd

When the configuration is done you’ll see a display like this.

Creating sample config files in sample-config/ ...

*** Configuration summary for nagios 4.1.1 08-19-2015 ***:

General Options:
Nagios executable: nagios
Nagios user/group: nagios,nagios
Command user/group: nagios,nagcmd
Event Broker: yes
Install ${prefix}: /usr/local/nagios
Install ${includedir}: /usr/local/nagios/include/nagios
Lock file: ${prefix}/var/nagios.lock
Check result directory: ${prefix}/var/spool/checkresults
Init directory: /etc/init.d
Apache conf.d directory: /etc/httpd/conf.d
Mail program: /bin/mail
Host OS: linux-gnu
IOBroker Method: epoll

Web Interface Options:
HTML URL: http://localhost/nagios/
CGI URL: http://localhost/nagios/cgi-bin/
Traceroute (used by WAP):

Review the options above for accuracy. If they look okay,
type 'make all' to compile the main program and CGIs.

Now run the following make commands. First run make all as shown.

make all

Once that runs the following will be displayed upon success. I’ve included it here as there are a few useful commands in it.

*** Compile finished ***

If the main program and CGIs compiled without any errors, you
can continue with installing Nagios as follows (type 'make'
without any arguments for a list of all possible options):

make install
- This installs the main program, CGIs, and HTML files

make install-init
- This installs the init script in /etc/init.d

make install-commandmode
- This installs and configures permissions on the
directory for holding the external command file

make install-config
- This installs *SAMPLE* config files in /usr/local/nagios/etc
You'll have to modify these sample files before you can
use Nagios. Read the HTML documentation for more info
on doing this. Pay particular attention to the docs on
object configuration files, as they determine what/how
things get monitored!

make install-webconf
- This installs the Apache config file for the Nagios
web interface

make install-exfoliation
- This installs the Exfoliation theme for the Nagios
web interface

make install-classicui
- This installs the classic theme for the Nagios
web interface

*** Support Notes *******************************************

If you have questions about configuring or running Nagios,
please make sure that you:

- Look at the sample config files
- Read the documentation on the Nagios Library at:

before you post a question to one of the mailing lists.
Also make sure to include pertinent information that could
help others help you. This might include:

- What version of Nagios you are using
- What version of the plugins you are using
- Relevant snippets from your config files
- Relevant error messages from the Nagios log file

For more information on obtaining support for Nagios, visit:



After that successfully finishes, then execute the following.

sudo make install
sudo make install-commandmode
sudo make install-init
sudo make install-config
sudo /usr/bin/install -c -m 644 sample-config/httpd.conf /etc/apache2/sites-available/nagios.conf

Now a little tinkering to add the web server user, www-data, to the nagcmd group.

sudo usermod -a -G nagcmd www-data

Now for some Nagios plugins. You can find the plugins listed for download here. The following are based on the 2.1.1 release of the plugins.

Change back to the user’s home directory on the server, then download the archive, extract it, and change into the newly extracted directory.

cd ~
curl -L -O
tar xvf nagios-plugins-*.tar.gz
cd nagios-plugins-*
./configure --with-nagios-user=nagios --with-nagios-group=nagios --with-openssl

Now for some ole compilation magic.

sudo make install

Now pretty much the same thing for NRPE. Look here to ensure that 2.15 is the latest version.

cd ~
curl -L -O
tar xvf nrpe-*.tar.gz
cd nrpe-*

Then configure the NRPE bits.

./configure --enable-command-args --with-nagios-user=nagios --with-nagios-group=nagios --with-ssl=/usr/bin/openssl --with-ssl-lib=/usr/lib/x86_64-linux-gnu

Then get to making it all.

make all
sudo make install
sudo make install-xinetd
sudo make install-daemon-config

Then a little file editing.

sudo vi /etc/xinetd.d/nrpe

Edit the only_from line to include the following, where 192.x.x.x is the IP of the Nagios server.

only_from = 192.x.x.x

Save the file, and restart the xinetd service.

sudo service xinetd restart

Now begins the Nagios Server configuration. Edit the Nagios configuration file.

sudo vi /usr/local/nagios/etc/nagios.cfg

Find this line and uncomment it.
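
The exact line isn’t shown here, but given the servers directory created just below, it is almost certainly the cfg_dir entry.

#cfg_dir=/usr/local/nagios/etc/servers

Remove the leading # so it reads cfg_dir=/usr/local/nagios/etc/servers.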


Save it and exit.

Next create the directory that will hold the configuration files for the servers to monitor.

sudo mkdir /usr/local/nagios/etc/servers

Next configure the contacts config file.

sudo vi /usr/local/nagios/etc/objects/contacts.cfg

Find this line and set the email address to one you’ll be using.
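
The exact line isn’t shown here; in the stock contacts.cfg it is the email directive inside the nagiosadmin contact definition, something like the following.

email                           nagios@localhost        ; <<***** CHANGE THIS TO YOUR EMAIL ADDRESS ******

Swap nagios@localhost for your own address.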


Now add a Nagios command definition for the check_nrpe command.

sudo vi /usr/local/nagios/etc/objects/commands.cfg

Add this to the end of the file.

define command{
        command_name check_nrpe
        command_line $USER1$/check_nrpe -H $HOSTADDRESS$ -c $ARG1$
}

Save and exit the file.

Now a few last touches for configuration in Apache. We’ll want the Apache rewrite and cgi modules enabled.

sudo a2enmod rewrite
sudo a2enmod cgi

Now create an admin user; we’ll call them ‘nagiosadmin’.

sudo htpasswd -c /usr/local/nagios/etc/htpasswd.users nagiosadmin

Create a symbolic link of nagios.conf to the sites-enabled directory and then start the Nagios server and restart apache2.

sudo ln -s /etc/apache2/sites-available/nagios.conf /etc/apache2/sites-enabled/
sudo service nagios start
sudo service apache2 restart

Enable Nagios to start on server boot (because, ya know, that’s what this server is going to be used for).

sudo ln -s /etc/init.d/nagios /etc/rcS.d/S99nagios

Now navigate to the server and you’ll be prompted to log in to the web user interface.
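
Assuming the same IP used earlier, the web interface will likely be at http://192.168.x.x/nagios/ and the nagiosadmin credentials created above will get you in.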


Now begins the process of setting up servers you want to monitor… stay tuned, more to come.

Kafka/Visual Studio Code/OSS… Distributed Consensus, Things to Learn

A Few Streams to Learn From: Apache Kafka!

Here I am in the middle of QCon SF, about to enjoy the “Demystifying Stream Processing with Apache Kafka” talk with Neha Narkhede @nehanarkhede. The background on this talk is rooted in Neha being a company co-founder, along with Jay Kreps, both of Kafka co-creation fame. Neha is providing a fundamental talk on the insight and usage of streams across distributed systems.

That second tweet was of the room before we had to move to the keynote space to make room for everybody that was interested in the topic! Holy snikies!

If you’d like to read some more information on Kafka and streaming, check out some of these posts.

I’m looking forward to digging into Kafka and various uses in the coming weeks. My current job (more on that REAL soon, and yes I said job) has me working with some heavy data (big data just isn’t even descriptive, so I’m going with the Marty McFly terminology of “Heavy” and adding “Data” to form a more descriptive and unique term).  ;)

Visual Studio Code goes OSS & more Wicked F#!

As I’m sitting listening to Neha’s talk I see a stream (because I multi-task like a crazy person) of things getting mentioned about something something OSS and something something F# and something something Visual Studio Code. This is happening even as we’re heavy into the middle of compaction, stream processing, discussions of queue points, and how to manage so many things in Kafka using a library with a kstream DSL, a processor API, and related interfaces that Neha is discussing. It’s all very interesting, so…

I’m going back and forth between what Neha is talking about, taking notes on the specific topic points I’ll need to research after her talk, and reading up on these something something OSS something something F# something something Visual Studio Code tweets. Then I stumbled into the rabbit hole of goodies that I was seeing…

Visual Studio Code is OSS now w/ F# Goodies!


At least, that’s my first response because this fixes my #1 complaint about Visual Studio Code. I hated that it wasn’t open source, when there was very little reason for it to be closed source. So much of it was open already, it just seemed confusingly absurd that it wasn’t open source. But here it is, wide open and ready for PRs yo!

The Marketplace for Visual Studio now has a few new goodies for F# too which is excellent!


Most of this is mentioned on the Visual Studio Code blog of course, but I’m outlining a few of the bits here, since I know not too many who read my blog follow the VSC blog – for various good reasons!  ;)

With that, I leave you with the two key tidbits that worked their way into my brain while I enjoyed learning about Kafka in Neha’s talk. Cheers!



A Small Rant About Being IDE-Dependent

{Sort of a Rant}

I recently saw this tweet.

I responded with this tweet.

Now I need to describe some context around this response real quick. Here are the key points behind this tweet from my context in the software development industry.

  1. I’m not a .NET Developer. I’m a software developer, or application programmer, coder, hacker, or more accurately a solutions developer. I don’t tie myself to one stack. I’ve developed with C# & VB.NET with Visual Studio. I’ve done C++ and VB 4. I’ve done VBA and all sorts of other things in the Microsoft space on Windows and for the web with Windows technology. About 5 years ago I completely dropped that operating system dependency and have been free of it for those 5 years. I doubt I ever want to couple an application up to that monstrous operating system again; it is far too limiting and has no significant selling point anymore.
  2. In addition to the .NET code I’ve written over the years I’ve also built a few dozen Erlang applications (wish I’d known about Elixir at the time), built a ton of Node.js based API and web applications, and have even done some Ruby on Rails and Sinatra. These languages and tooling stacks really introduced me to the reality I longed for: a clean, fast, understandable reality. A reality with configuration files that are readable and convention that gives me real power to get things done and move on to the next business need. These stacks gave me the ability to focus more on code, business, and research value, and less on building the tool stack up just to get one single deployment out the door. Since then I’ve not looked back, because the heavy handed stacks of legacy Java, .NET, and related things have only weighed heavily on small businesses and startups trying to add business value. In the lands I live in, that is San Francisco to Vancouver BC and the cities in between, Java and .NET can generally take a hike. Businesses have things to do, namely make money or commit to getting research done, not piddling around building a tool stack up.

So how was this new reality created? It isn’t actually new; the aberration was the heavy handed IDE universe of fat editors that baby a developer through every step. They create laziness in thinking and do NOT encourage a lean, efficient, simple way to get an application built and running. It doesn’t have to be this way. A small amount of developer discipline, or dare I say craftsmanship, and most of these fat IDEs and tightly coupled projects with massive XML files for configuration in their unreadable catastrophofuck of spaghetti can just go away.

Atom, Sublime, and other editors before that harnessed this clean, lean, and fast approach to software development. A Node.js project only needs one command to start a project.

npm init

A Ruby on Rails Project is about the same.

rails new path/to/your/new/application

That’s it, and BOOM you’ve got an application project base to start from.

Beyond that however, using a tool like Yeoman opens up a very Unix style way of doing something. Yeoman has a purpose, to build scaffolding for applications. That is its job and it does that job well. It has a pluggable architecture to enable you to build all sorts of scaffold packaging that you want. Hell, you can even create projects with the bloated and unreadable XML blight that pervades many project types out there from enterprisey platforms.

Take a look at Yeoman, it is well worth it. This is a tool that allows us to keep our tooling, what we do use, loosely coupled so we don’t have a massive bloated (5+ GB) installation of nonsense to put on our machines. Anybody on any platform can load up Yeoman, grab an editor like Atom, Sublime, or Visual Studio Code (not to be confused with the bloated Visual Studio), and just start coding!

Take a look at some of the generators that Yeoman will build projects for you with -> there are over 1500! No more need to tightly couple this into an IDE. Just provide a Yeoman plugin in your editor of choice. Boom, all the features you need without the tight coupling in the editor! Win, win, win, win, and win!
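
As a quick, hedged sketch (assuming the generator-webapp package, just one of the many community generators), getting a scaffolded project is a couple of commands.

npm install -g yo generator-webapp
yo webapp

From there you open the generated project in whatever editor you like and get to work.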

A bit more of my rant about bloated editors: don’t get me wrong, I love some of the features of the large IDEs like Visual Studio and WebStorm. But one huge gripe I have is the absolutely zero motivation or vested interest that a community has to add features, fix bugs, or do anything in relation to editors like Visual Studio or WebStorm. ZERO reason to help, only to file complaints or maybe a bug report. That’s cool, I’m glad that those editors exist, but this model isn’t the future. It’s slowly dying, and having tightly coupled, slow, massively bloated editors isn’t going to be sustainable. They don’t adapt to new stacks, languages, or even existing ones.

Meanwhile, Atom, Sublime, and even Visual Studio Code and other lean editors have plugin and various adapter patterns that allow one to add support for new languages, stacks, frameworks, and related tooling without tight coupling to that thing. This gives these editors the ability to add features and capabilities for stacks at a dramatically faster rate than the traditional fat IDEs.

This is the same idea toward tooling that is used in patterns that a software developer ought to know. Think of patterns like separation of concerns and loose coupling: don’t tie yourself into an untenable and frustrating toolchain. Keep it loosely coupled and I promise, your life will get easier and that evening beer will come a little bit sooner.

{Sort of a Rant::End}

…and this is why I say, do NOT tie project creation into an editor. Instead keep things loosely coupled and let’s move into the future.

The Question of Docker, The Future of OS Virtualization

In this article I’m going to take a look at Docker and OS virtualization independently of each other. There’s a reason, which will unfold as I dig through some data and provide this look into what is and isn’t happening in the virtualization space.

It’s important to also note what methods were used to attain the information provided in this article. I have obtained information through speaking with Docker employees and key executives, including Ben Golub and founder Solomon Hykes, over the years since the founding of Docker (and its previous incarnation dotCloud, before the pivot and name change to Docker).

Beyond communicating directly with the Docker team and gaining insight from them I have also done a number of interviews over the course of 4 days. These interviews have followed a fairly standard set of questions and conversation about the Docker technology, including but not limited to the following questions.

  • What is your current use of Docker visualization technologies?
  • What is your future intended use of Docker technologies?
  • What is the general current configuration and setup of your development team(s) and the tooling that they use (i.e. stack: .NET, Java, Python, Node.js, etc.)?
  • Do you find it helps you to move forward faster than without?

The History of OS-Level Virtualization

First, let’s take a look at where virtualization has been, then I’ll dive into where it is now, and then I’ll take a look at where it appears to be going in the future and derive some information from the interviews and discussions that I’ve had with various teams over the last 4 days.

The Short of It

OS-level virtualization is a virtualization approach that allows the installation of software in a complete file system, just like a hypervisor based virtual server, but with dramatically faster installation and prospectively better speed overall, since the virtual clients share the host OS kernel. This cuts down on excess redundancies between the core system and the respective virtual clients on the host.

Virtualization in concept has been around since the 1960s, with IBM being heavily involved at the Cambridge Scientific Center. Over time developments continued, but the real breakthrough in pushing virtualization into the market was VMware in 1999 with their virtual platform. This hypervisor-level virtualization grew into a huge industry with the help of VMware.

However OS-level virtualization, which is what Docker is based on, didn’t take off immediately when introduced. There were many product options that came out over time around OS-level virtualization, but nothing made a huge splash in the industry similar to what Docker has. Fast forward to today: Docker was released in 2013 and has seen ever increasing developer demand and usage.

Timeline of Virtualization Development

Docker really brought OS-level virtualization to the developer community at the right time in regards to demands around web development and new ways to implement effective continuous delivery of applications. Docker has been one of the most extensively used OS-level virtualization tools to implement immutable infrastructure, continuous build, integration, and deployment environments, and to use as a general virtual environment to spool up resources as needed for development.

Where we Are With Virtualization

Currently Docker holds a pretty dominant position in the OS-level virtualization market space. Let’s take a quick look at their community statistics and involvement from just a few days ago.

The Stats: Docker on Github ->

Watchers: 2017
Starred: 22941
Forks: 5617

16,472 Commits
3 Branches
102 Releases
983 Contributors

Just from that data we can ascertain that the Docker community is active. We can also take a deeper look into the forks, pull requests, acceptance rates, and related data to find that the overall codebase is healthy and has active involvement. This is good to know, since at one point there were questions about whether Docker had the capability to manage the open source legions pushing the product forward while maintaining the integrity, reputation, and quality of the product.

Now let’s take a look at what that position is based on considering the interviews I’ve had in the last 4 days. Out of the 17 people I spoke with all knew what Docker is. That’s a great position to be in compared to just a few years ago.

Out of the 17 people I spoke with, 15 of the individuals are working on teams that have implemented, are implementing, or are in some state between having and implementing Docker in their respective environments.

Of the 17, only 13 said they were concerned in some significant way about Docker Security. All of these individuals were working on teams attempting to figure out a way to use Docker in a production way, instead of only in development or related uses.

The list of uses that the 17 want to use or are using Docker for vary as much as the individual work that each is currently working on. There are however some core similarities in what they’re working on where Docker comes into play.

The most common similarity among Docker uses is simply as a platform to build out development testing environments or test servers. This is routinely a database server or a simple distributed database like Cassandra or Riak that can be built immutably, then destroyed and recreated whenever it is needed again for test and development. Some of the build outs are done with Docker specifically to work up a mock distributed database environment for testing. Mind you, I’m probably hearing about and seeing this because of my past work with Basho and other distributed systems programmers, companies, and efforts around this type of technology. It’s still interesting and very telling nonetheless.
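
As a hedged sketch of that pattern (assuming the official Cassandra image on Docker Hub, not something any particular interviewee named), a throwaway database looks roughly like this.

docker run -d --name cassandra-test cassandra
# run the test suite against the container, then destroy it
docker rm -f cassandra-test

The same create, use, destroy cycle applies to whatever test server a team needs.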

The second most common usage is for Docker to be used somewhere in the continuous delivery chain. The push to move the continuous integration and delivery process to a more immutable, repeatable, and reliable process has been a perfect marriage between Docker and these needs. The ability to spin up entire environments in a matter of seconds and destroy them on a whim, creating them again a matter of moments later, has made continuous delivery more powerful and more possible than it has ever been.

Some of the less common, yet still key uses of Docker that came up during the interviews included: in-memory cache servers, network virtualization, and distributed systems.

Virtualization’s Future


With the history covered, the core uses of Docker discussed, let’s put those on the table with the acquisitions. The acquisitions by Docker have provided some insight into the future direction of the company. The acquisitions so far include: Kitematic, SocketPlane, Koality, and Orchard.

From a high level strategic play, the path Docker is pushing forward into is a future of continued virtualization around, as the hipsters might say, “all the things”. Their purchases of Kitematic and SocketPlane will both help Docker expand past OS virtualization alone and push more toward systemic virtualization of network environments, with programmatic capabilities and more. These are capabilities that are needed to move past the legacy IT environments of yesteryear, which will open up more enterprise possibilities too.

To further their core use that exists today, Docker has purchased Koality. Koality provides parallelizable continuous integration, deployment, and related services. This enables Docker to provide more built out services around this very important use case.

The other acquisition was Orchard, a startup that provides a Docker host in the cloud, instantly. This is a similar purchase to the Koality one. It bulks up capabilities that Docker already had at some level. It also pushes them forward with two branches of capabilities: SaaS based on the web and prospectively offering something behind the firewall, in which the Koality acquisition might have some part to play also.

Threat Vectors

Even though the pathways toward the future seem clear for Docker in many ways, in other ways they seem dramatically less clear. For one, there are a number of competitive options that are in play now, gaining momentum, and on the horizon. One big threat is that Google’s lack of interest in Docker has led them to build competing tooling. If they push hard into the OS-level virtualization space they could become a substantial threat.

The other threat vector, is the simple unknown of what could become a threat. Something like Mesos might explode in popularity and determine it doesn’t want to use Docker, and focus on another virtualization path. In the same sense, Mesos could commoditize Docker to a point that the value add at that level of virtualization doesn’t retain a business market value that would sustain Docker.

The invisible threat around this area right now is fairly large. There’s no greater way to determine this than to just get into a conversation with some developers about Docker. In one sense they love what it allows them to do, but the laundry list of things they’d like would allow for a disruptor to come in and steal the Docker thunder pretty easily. To put it simply, there isn’t a magical allegiance to Docker; developers will pick what helps them move the ball forward the fastest and easiest.

Another prospective threat is a massive purchase by a legacy software company like Oracle, Microsoft, or someone else. This could effectively destabilize the OSS aspects of the product and slow down development and progress, yet it could increase corporate adoption many times over what it is now. So this possibility is something that shouldn’t be ruled out.


Docker has two major threats: the direct competitor and prospectively being leapfrogged by another level of virtualization. The other prospective threat to part of the company is the acquisition of Docker itself, though that could also mean a huge increase in enterprise penetration. On the path the company and technology are moving forward on, there will be continued growth in usage and capabilities. The growth will continue among the leading technology startups and companies of this kind, while mid-size and larger corporate environments will continue to adopt and deploy at a slower pace.

A Question for You

I’ve put together what I’ve noticed, and I’d love to see things that you, dear reader, might notice about the Docker momentum machine. Do you see networking as a strength, other levels of virtualization, deployment of machines, integration or delivery, or some other part of this space as the way forward into the future? Let me know what your thoughts are on Twitter or whatever medium you feel like reaching out on. Of course, I’d also love to know if you think I’m wrong about anything I’ve written here.

Truly Excellent People and Coding Inspiration…

.NET Fringe took place this last week. It’s been a rather long time since the last conference I actually got to attend, meet people at, and talk with people about all the different projects, aspirations, goals, and ideas about what’s next for the future. This conference was perfect to jump into; first and foremost, I knew it was an effort at being inclusive of the existing community and newcomers. We’d reached out to many brave souls to come and attend this conference about pushing technology into the future.

I met some truly excellent people. Smart, focused, intent, and a whole lot of great conversations followed meeting these people. Here’s a few people you’ll want to keep an eye on based on the technology they’re working on. I got to sit down and talk to every one of these coders and they’re in top form, smart, inventive, witty and full of great humor to boot!

Maria Naggaga @Twitter

I met Maria and one of the first things I saw was her crafty and most excellent art sketches around lifestyles, heroes, and more. I love art like this, and was really impressed with what Maria had done with hers.

Maria giving us the info.

I was able to hang out with Maria a bit more and had some good conversation time talking about evangelism, tech fun, and nonsense all around. I also was able to attend her talk on “Legacy… What?” which was excellent. The question she posed in the description is a common one: “When students think about .Net they think: legacy, enterprise, retired, and what is that?” which I too find to be a valid thought. Is .NET purely legacy these days? For many getting into the field it generally isn’t the landscape of greenfield applications and is far more commonly associated with legacy applications. Hearing her vantage point on this as an evangelist was eye opening. I gained more ideas and thoughts, and was pushed to really get that question answered for students in a different way… which I’ll add to sometime in the future in another blog entry.

Kathleen Dollard @Twitter && @Github

I spoke to Kathleen while we took a break across the street from the conference at Grendal’s Coffee Shop. We talked a lot about education and what is effective training, diving heavily into what works around video, samples, and related things. You see, we’re both authors at Pluralsight too and spend a lot of time thinking about these things. It was great to be able to sit down and really discuss these topics face to face.

We also dived into a discussion about city livability and how Portland’s transit system works, what is and isn’t working in the city and what it’s like to live here. I was, of course, more than happy to provide as much information as I could.

We also discussed her interest in taking legacy shops (i.e. pre-C# even, maybe Delphi or whatever might exist) and helping them modernize their shop. I found this interesting, as it could be a lot of fun figuring out large gaps in technology like that and helping a company to step forward into the future.

Kathleen gave two presentations at the conference – excellent presentations. One was the “Your Code, Your Brain” presentation, talking about exactly the topic of legacy shops moving forward without disruption.

If you’re interested in Kathleen’s courses, give a look here.

Amy Palamountain @Twitter && @Github

Amy had wicked great slides and samples that were probably the most flawless I’ve seen in a while. Matter of fact, a short while after the conference Amy put together a blog entry about those great slides and samples, “Super Smooth Technical Demoes“.

An intent and listening audience.

Amy’s talk at the conference was titled “Space, Time, and State“. It almost sounds like we could just turn that into an acronym. The talk was great, touching on the aspects of reactiveness and the battle with state that we developers fight every day while building solutions.

We also got to talk a little after the presentation about the horror of time zones, along with a slew of other good conversation.

Tomasz Janczuk @Twitter && @Github

AAAAAaggghhhhhh! I missed half of Tomasz’s talk! It always happens at every conference right! You get to talking to people, excited about this topic or that topic and BOOM, you’ve missed half of a talk that you fully intended to attend. But hey, the good part is I still got to see half the talk!

If you’re not familiar with Tomasz’ work and you do anything with Node.js you should pay close attention. Tomasz has been largely responsible for the great work behind Edge.js and influencing the effort to get Node.js running (and running damn well might I add) on Windows. For more on Edge.js check out Act I and Act II and the Github repository.

The Big Hit for Me, Distributed Systems

First some context. About 4 years ago I left the .NET Community almost entirely. Even though I was still doing a little work with C#, I primarily switched stacks to other things to push forward with Riak, distributed systems usage, devops deployment of client apps, and a whole host of other things. At the time I had basically gotten really burned out on where the .NET Community had ended up worldwide. While some pushed onward with the technologies I loved to work with, I was tired of waiting, so I dived into some esoteric stuff and learned strange programming techniques in JavaScript, Ruby, and Erlang, and dived deeper into distributed technologies for use in application construction.

However some in the community didn’t stop moving the ball forward, and at this conference I got a great view into some of that progress! I’m stoked to see this technology and where it is now, because there is a LOT of potential for a number of things. Here are the two talks and two more great people I got to see speak. One I knew already (great to see you again and hang out Aaron!) and one I had the privilege & honor to meet (it was most excellent hanging out and seeing your presentation Lena).

Aaron Stonnard @Twitter && @Github

Aaron I’d met back when Troy & I put together the first Node PDX. Aaron had swung into Portland to present on “Building Node.js Applications on Windows Azure“. At .NET Fringe however Aaron was diving into a topic that was super exciting to me. The first line of the description from the topic really says it all “Distributed computing in .NET isn’t something you often hear about, but it’s becoming an increasingly important area for growing .NET businesses around the globe. And frankly it’s an area where .NET has lagged behind other runtimes and platforms for years – but this is changing!“. Yup, that’s my exact pain point, it’s awesome to know Aaron & Petabridge are kicking ass in this space now.

Aaron’s presentation was solid, as to be expected. We also had some good conversations after and before the presentation about the state of distributed compute and systems within the Microsoft and Windows ecosystem. To check out more about Akka .NET that Aaron & Andrew Skotzko …  follow @AkkaDotNet, @aaronontheweb, @petabridge, and @askotzko.

Akka .NET

Alena Dzenisenka @Twitter && @Github


…Lena traveled all the way from Kiev in the Ukraine to provide the .NET Fringe crowd with some serious F# distributed and parallel compute knowledge in “Embracing the Cloud“!  (Slides here)

Here’s a short dive into F# if you’re unfamiliar, which you can install on OS-X, Windows or whatever. So don’t use the “well, I don’t use windows” excuse to not give it a try! Here’s info about MBrace that Lena also used in her demo. Also dive into brisk from elastacloud…

In addition to the excellent talk that Lena gave I also got to hang out with her, Phil Haack, Ryan Riley, and others over food at Biwa on the last day of the conference. After speaking with Lena about the Ukraine, computing, coding and other topics around hacking and the OSS Community she really inspired me to take a dive into these tools for some of the work that I’m working on now and what I’ll be doing in the near future.

All The Things

Now of course, there were a ton of other people I got to meet, people I got to catch up with I haven’t seen in ages and others I didn’t get to write about. It was a really great conference with great content. I’m looking forward to round 2 and spending more time with everybody in the future!

The whole bunch of us at the end of the conference!

Cheers everybody!   \m/

An Aside of Blog Entries on .NET Fringe

Here are some additional blog entries that others wrote about the event. In addition to these blog entries I’ll be updating this entry with any additional entries that I see pop up – so if you post one let me know, and I’ll also update these talks above that I’ve discussed with videos when they’re posted live.

“Starting a Basic Loopback API & Continuous Integration”

In this article Keartida is going to dive into setting up a basic Loopback API project and get a build of that project running on a continuous integration service. In this example she’s going to get the project set up with Codeship. A quick sketch of the project bootstrap follows the lists below.


  • Be sure, whichever system you are using, to have a C++ compiler installed. For Windows that usually means installing Visual Studio or something similar; on OS-X install XCode and the Developer Tools. On Ubuntu the GCC compiler and other options exist. For instructions on OS-X and Linux check out installing compiler tools.
  • Ubuntu
  • OS-X
  • For Windows, I’d highly suggest setting up a VM of Ubuntu to do any work with Loopback, Node.js, or to follow along with this material. It’s possible on Windows, but there are a number of things that are lacking. If you still want to make a go of using Windows, here are some initial setup steps.

Nice to Haves:

  • git-flow – works on any bash, handles the branching and merging. Very nice scripts to have.
  • bashit – Adding more information to the bash prompt (works on OS-X, not Ubuntu or Windows Bash)
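
As a hedged sketch of the project bootstrap itself (assuming the StrongLoop CLI tooling of that era rather than anything Keartida specifically used), creating the base API project is roughly the following.

npm install -g strongloop
slc loopback

From there the generated project can be pushed to GitHub and wired up to Codeship for the continuous integration build.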


“Getting Started, Kanban & First Steps for a Sharing App”

This is the first (of course the precursor to this entry was the zero day team introduction article) of an ongoing series I’m going to put together. I’m going to write this series from the context of a team building a product. I’ll have code samples and more as I work along through the material.

The first step included Oi Elffaw having a discussion with the team to set up the first week’s working effort. Oi decided to call it a sprint and the rest of the team decided that was cool too. This was week one after all, and there wasn’t going to be much else besides testing, research, and setup taking place.


Before starting everything I went ahead and created a project repository on github for Oi to use with the kanban service, an online service that works with github issues to provide a kanban style interface to the issues. This provides an easier view, especially for leads and management, to get insight into where things are and what’s on the plate for the team for the week. I included the default node.js .gitignore file and an Apache 2.0 license when I created the repository. Github then seeds the project with the .gitignore and license files.

After setting up the repository in github I pinged Oi and he set to work after the team’s initial meeting to discuss what week one would include.