Nagios and Ubuntu 64-bit 14.04 LTS Setup & Configuration

1st – The Virtual Machine

First I created a virtual machine for use with VMware Fusion on OS X. Once I had a nice clean Ubuntu 14.04 image set up, I installed SSH on it so I could manage it as if it were a headless (i.e. no monitor attached) machine (instructions).

In addition to installing openssh, these steps also include build-essential, make, and gcc, along with instructions for installing VMware Tools. Don't worry about the VMware Tools part; those instructions are cumbersome and in parts just wrong, so skip that. The virtual machine is up and running with ssh and a good C compiler at this point, so we're all set.

2nd – The LAMP Stack

sudo apt-get install apache2

Once installed, the default page will be available on the server, so navigate over to 192.168.x.x and view the page to ensure it is up and running.
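
If you're managing the box headless over ssh, a quick check from the terminal works too; this just asks Apache for the response headers (same 192.168.x.x address as above).

curl -I http://192.168.x.x/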


Next install mysql-server and php5-mysql.

sudo apt-get install mysql-server php5-mysql

During this installation you will be prompted for the MySQL root account password. It is advisable to set one.

Once the install finishes, run the secure installation script, which is where the prompts below come from.

sudo mysql_secure_installation

You will be asked to enter the password for the MySQL root account (the one you just set about 2 seconds ago). Next, it will ask you if you want to change that password. Select ‘n’ so as not to create another password for the root account, since you’ve already created the password just a few seconds before.

For the rest of the questions, simply hit the enter key for each prompt to accept the default values. This will remove some sample users and databases, disable remote root logins, and reload the privilege tables so that MySQL immediately respects the changes we have made.

Next up is to install PHP. No grumbling, just install PHP.

sudo apt-get install php5 libapache2-mod-php5 php5-mcrypt

Next let’s open up dir.conf and edit a small section that controls which files Apache will serve up first when a directory is requested. Here’s what the edit should look like.

Open up the file to edit. (In vi, to insert or edit, hit the ‘i’ key. To save, hit escape and then ‘:w’, and to exit vi after saving hit escape and then ‘:q’. To force exit without saving hit escape and ‘:q!’.)

sudo vi /etc/apache2/mods-enabled/dir.conf

This is what the file will likely look like once opened.

<IfModule mod_dir.c>
    DirectoryIndex index.html index.cgi index.pl index.php index.xhtml index.htm
</IfModule>

Move index.php to the beginning of the DirectoryIndex list.

<IfModule mod_dir.c>
    DirectoryIndex index.php index.html index.cgi index.pl index.xhtml index.htm
</IfModule>

Now restart apache so the changes will take effect.

sudo service apache2 restart

Next let’s set up public key authentication. On your local box, generate an SSH key pair as follows.
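
Generate the key pair (this assumes the default RSA key type):

ssh-keygen -t rsa

Accept the default file location when prompted, and either enter a passphrase or leave it empty.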


If you don’t enter a passphrase, you will be able to use the private key for authentication without entering a passphrase. If you’ve entered one, you’ll need both it and the private key to log in. Securing your keys with passphrases is more secure, but either way the system is more secure this way than with basic password authentication. For this particular situation, I’m skipping the passphrase.

What is generated is id_rsa, the private key, and id_rsa.pub, the public key. They’re put in a directory called .ssh under the local user’s home directory.

At this point copy the public key to the remote server. On OS X, grab the easy-to-use ssh-copy-id script with this command.

brew install ssh-copy-id

Alternatively, install it with this script.

curl -L https://raw.githubusercontent.com/beautifulcode/ssh-copy-id-for-OSX/master/install.sh | sh

Then use the script to copy the ssh key to the server.

ssh-copy-id adron@192.168.x.x

That should give you the ability to log into the machine without a password every time. Give it a try.

Ok, so now on to the meat of this entry, Nagios itself.

Nagios Installation

Create a user and group that will be used to run the Nagios process.

sudo useradd nagios
sudo groupadd nagcmd
sudo usermod -a -G nagcmd nagios

Install these other essentials.

sudo apt-get install libgd2-xpm-dev openssl libssl-dev xinetd apache2-utils unzip

Download the source and extract it, then change into the directory.

curl -L -O https://assets.nagios.com/downloads/nagioscore/releases/nagios-4.1.1.tar.gz
tar xvf nagios-*.tar.gz
cd nagios-*

Next run the command to configure Nagios with the appropriate user and group.

./configure --with-nagios-group=nagios --with-command-group=nagcmd

When the configuration is done you’ll see a display like this.

Creating sample config files in sample-config/ ...

*** Configuration summary for nagios 4.1.1 08-19-2015 ***:

 General Options:
        Nagios executable:  nagios
        Nagios user/group:  nagios,nagios
       Command user/group:  nagios,nagcmd
             Event Broker:  yes
        Install ${prefix}:  /usr/local/nagios
    Install ${includedir}:  /usr/local/nagios/include/nagios
                Lock file:  ${prefix}/var/nagios.lock
   Check result directory:  ${prefix}/var/spool/checkresults
           Init directory:  /etc/init.d
  Apache conf.d directory:  /etc/httpd/conf.d
             Mail program:  /bin/mail
                  Host OS:  linux-gnu
          IOBroker Method:  epoll

 Web Interface Options:
                 HTML URL:  http://localhost/nagios/
                  CGI URL:  http://localhost/nagios/cgi-bin/
 Traceroute (used by WAP):

Review the options above for accuracy.  If they look okay,
type 'make all' to compile the main program and CGIs.

Now run the following make commands. First run make all as shown.

make all

Once that completes, the following will be displayed upon success. I’ve included it here as there are a few useful commands in it.

*** Compile finished ***

If the main program and CGIs compiled without any errors, you
can continue with installing Nagios as follows (type 'make'
without any arguments for a list of all possible options):

  make install
     - This installs the main program, CGIs, and HTML files

  make install-init
     - This installs the init script in /etc/init.d

  make install-commandmode
     - This installs and configures permissions on the
       directory for holding the external command file

  make install-config
     - This installs *SAMPLE* config files in /usr/local/nagios/etc
       You'll have to modify these sample files before you can
       use Nagios.  Read the HTML documentation for more info
       on doing this.  Pay particular attention to the docs on
       object configuration files, as they determine what/how
       things get monitored!

  make install-webconf
     - This installs the Apache config file for the Nagios
       web interface

  make install-exfoliation
     - This installs the Exfoliation theme for the Nagios
       web interface

  make install-classicui
     - This installs the classic theme for the Nagios
       web interface

*** Support Notes *******************************************

If you have questions about configuring or running Nagios,
please make sure that you:

     - Look at the sample config files
     - Read the documentation on the Nagios Library at:

before you post a question to one of the mailing lists.
Also make sure to include pertinent information that could
help others help you.  This might include:

     - What version of Nagios you are using
     - What version of the plugins you are using
     - Relevant snippets from your config files
     - Relevant error messages from the Nagios log file

For more information on obtaining support for Nagios, visit:




After that successfully finishes, execute the following.

sudo make install
sudo make install-commandmode
sudo make install-init
sudo make install-config
sudo /usr/bin/install -c -m 644 sample-config/httpd.conf /etc/apache2/sites-available/nagios.conf

Now some tinkering to add the web server user, www-data, to the nagcmd group.

sudo usermod -a -G nagcmd www-data

Now for some Nagios plugins. You can find the plugins listed for download here: http://nagios-plugins.org/download/. The following steps are based on the 2.1.1 release of the plugins.

Change back to the user’s home directory on the server, then download, extract, and change into the newly unpacked directory.

cd ~
curl -L -O http://nagios-plugins.org/download/nagios-plugins-2.1.1.tar.gz
tar xvf nagios-plugins-*.tar.gz
cd nagios-plugins-*
./configure --with-nagios-user=nagios --with-nagios-group=nagios --with-openssl

Now for some ole compilation magic.

make
sudo make install

Now pretty much the same thing for NRPE. Check the NRPE download page to ensure that 2.15 is still the latest version.

cd ~
curl -L -O http://downloads.sourceforge.net/project/nagios/nrpe-2.x/nrpe-2.15/nrpe-2.15.tar.gz
tar xvf nrpe-*.tar.gz
cd nrpe-*

Then configure the NRPE bits.

./configure --enable-command-args --with-nagios-user=nagios --with-nagios-group=nagios --with-ssl=/usr/bin/openssl --with-ssl-lib=/usr/lib/x86_64-linux-gnu

Then get to making it all.

make all
sudo make install
sudo make install-xinetd
sudo make install-daemon-config

Then a little file editing.

sudo vi /etc/xinetd.d/nrpe

Edit the only_from line to include the IP of the Nagios server, where 192.x.x.x is that IP.

only_from = 192.x.x.x

Save the file, and restart the xinetd service.

sudo service xinetd restart

Now begins the Nagios Server configuration. Edit the Nagios configuration file.

sudo vi /usr/local/nagios/etc/nagios.cfg

Find this line and uncomment it; it points at the servers directory we’ll create in just a moment.

cfg_dir=/usr/local/nagios/etc/servers
Save it and exit.

Next create the configuration directory that will hold definitions for the servers to monitor.

sudo mkdir /usr/local/nagios/etc/servers
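
As a quick preview of what will eventually land in that directory, a minimal host definition saved as something like /usr/local/nagios/etc/servers/monitored-box.cfg looks roughly like this (the host name and address here are hypothetical; linux-server is a template from the sample configs).

define host{
        use        linux-server
        host_name  monitored-box
        alias      A Box To Monitor
        address    192.168.x.x
}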

Next configure the contacts config file.

sudo vi /usr/local/nagios/etc/objects/contacts.cfg

Find this line and set the email address to one you’ll be using.

email                           adronsemail@compositecode.com

Now add a Nagios command definition for check_nrpe.

sudo vi /usr/local/nagios/etc/objects/commands.cfg

Add this to the end of the file.

define command{
        command_name check_nrpe
        command_line $USER1$/check_nrpe -H $HOSTADDRESS$ -c $ARG1$
}

Save and exit the file.
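
Later on, when defining services for the hosts being monitored, this command gets used with the remote check name passed as the argument after the ! separator. A sketch of such a service definition (the host_name and check name here are hypothetical) looks like this.

define service{
        use                     generic-service
        host_name               monitored-box
        service_description     CPU Load
        check_command           check_nrpe!check_load
}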

Now a few last touches for configuration in Apache. We’ll want the Apache rewrite and cgi modules enabled.

sudo a2enmod rewrite
sudo a2enmod cgi

Now create an admin user; we’ll call them ‘nagiosadmin’.

sudo htpasswd -c /usr/local/nagios/etc/htpasswd.users nagiosadmin

Create a symbolic link of nagios.conf to the sites-enabled directory and then start the Nagios server and restart apache2.

sudo ln -s /etc/apache2/sites-available/nagios.conf /etc/apache2/sites-enabled/
sudo service nagios start
sudo service apache2 restart

Enable Nagios to start on server boot (because, ya know, that’s what this server is going to be used for).

sudo ln -s /etc/init.d/nagios /etc/rcS.d/S99nagios
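
If anything misbehaves, a handy sanity check is to have Nagios validate its own configuration. Given the ${prefix} of /usr/local/nagios shown in the configure summary above, that looks like this.

sudo /usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg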

Now navigate to the server at http://192.168.x.x/nagios and you’ll be prompted to log in to the web user interface with the nagiosadmin credentials created above.


Now begins the process of setting up servers you want to monitor… stay tuned, more to come.


Using the file system fs.* w/ Node.js

I have a few basic tasks that I need to perform on the file system with Node.js. I’m going to clone a repository into a directory, but before that, if the target directory for the clone already exists, I’d like to do something with it first. To check for the existence of the directory I quickly found two functions that work well for this purpose.

First there is the asynchronously executing “exists” function.

var fs = require('fs');

fs.exists('/etc/passwd', function (exists) {
  console.log(exists ? "it's there" : 'no passwd!');
});

This function takes the path passed in, then has a callback that reacts to the result, which is true or false. A similar function for a synchronous call is available too, called “existsSync”.

var fs = require('fs');

var path = '/etc/passwd';
var result = fs.existsSync(path);

// result will be true or false.

Now there is one major problem with these two functions. Currently, they’re marked as “deprecated” with a stability rating of 0. This simply means that they’re likely to go away in the near future. Now, I hate the idea of using these functions knowing that I might upgrade to a new version one day and have my application nuked. That just isn’t really acceptable.

So I go digging around at the other functions that the documentation states to use instead. For the asynchronous fs.exists(path, callback) function it states to use “fs.stat” or “fs.access” instead. Let’s take a look at these two first.

The “fs.stat” function is very similar to “fs.exists” in calling signature; it takes a path and a callback to execute. I created the short snippet below to show what this function looks like.

var fs = require('fs');

var path1 = 'newdirectory';
var path2 = 'thisShouldntExist';

function myDirectoryExistsCheck (path) {
  // stat the path that was passed in, not a closed-over variable
  fs.stat(path, function (err, stat) {
    if (err && err.code === 'ENOENT') {
      console.log('No file asynchronously found.');
    } else if (err) {
      console.log('Some other error occurred.');
    } else {
      console.log('File found asynchronously - Mode: ' + stat.mode + ' Size: ' + stat.size);
    }
  });
}

myDirectoryExistsCheck(path1);
myDirectoryExistsCheck(path2);

Based on the first call finding a directory and the second call not finding one, here are the results I got from this code snippet.

File found asynchronously - Mode: 16877 Size: 102
No file asynchronously found.

Compared to the first functions, which are now deprecated, the amount of code has ballooned pretty dramatically: from making a standard function call with a true or false result to confirming existence based on the data returned in an fs.Stats object. I wrote a little console.log code just to show what this object looks like when it is printed out. This…

fs.stat(path1, function (err, stat) {

…prints out this.

{ dev: 16777220,
  mode: 16877,
  nlink: 3,
  uid: 501,
  gid: 20,
  rdev: 0,
  blksize: 4096,
  ino: 10692619,
  size: 102,
  blocks: 0,
  atime: Mon Nov 23 2015 11:23:55 GMT-0800 (PST),
  mtime: Mon Nov 23 2015 11:20:14 GMT-0800 (PST),
  ctime: Mon Nov 23 2015 11:20:14 GMT-0800 (PST),
  birthtime: Mon Nov 23 2015 11:19:40 GMT-0800 (PST) }

It’s kind of nice to have an object come back with data that one can use to derive a result from, but it’s kind of a bummer when all I want to do is write code that checks for existence and returns true or false. Now, in either case, if an error occurs, such as what happens when a file or directory is not found with the “fs.stat” function, the result looks like this. This…

fs.stat(path2, function (err, stat) {

…prints out this.

{ [Error: ENOENT: no such file or directory, stat 'thisShouldntExist']
  errno: -2,
  code: 'ENOENT',
  syscall: 'stat',
  path: 'thisShouldntExist' }

Another thing I added, shown below, demonstrates two useful functions available when a file or directory is present.

fs.stat(path, function (err, stat) {
  if (err) {
    if (err.code === 'ENOENT') {
      console.log('No file asynchronously found.');
    } else {
      console.log('Some other error occurred.');
    }
  } else {
    console.log('File found asynchronously - Mode: ' + stat.mode + ' Size: ' + stat.size);
    console.log('Other things to check:');
    console.log('  - ' + stat.isFile());
    console.log('  - ' + stat.isDirectory());
  }
});
This of course checks for errors, which will occur if there is not a file or directory to find. But if there is, the stat object that is returned asynchronously via the callback has several functions attached; the two that are immediately useful for me are the “isFile()” and “isDirectory()” functions. They’re pretty self-explanatory.

Now, of course this still means you’re writing code to deal with errors instead of what the “fs.exists” functions would do. So really, what needs to be done is a wrapper needs to be written to handle this somewhat basic and crude handling of the file system. But since this is such a common need for many applications, I thought, “hey, time to check and see if there’s some npm library magic that’s available!” …and oddly enough I wasn’t impressed by anything I could dig up. Maybe my search-fu wasn’t so hot. But it looks like I’ll be building up this stuff from scratch, along the lines of the sketch below. With that, happy hacking and I’ll be back later with more…
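
In the meantime, here’s a minimal sketch of the kind of wrapper I have in mind, built on “fs.access” (the other function the documentation points to); the exists name and the example path are just mine.

var fs = require('fs');

// Wraps fs.access so the callback gets a plain true/false,
// like fs.exists used to provide.
function exists(path, callback) {
  fs.access(path, fs.F_OK, function (err) {
    callback(!err);
  });
}

exists('/etc/passwd', function (result) {
  console.log(result ? "it's there" : 'no passwd!');
});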

Kafka/Visual Studio Code/OSS… Distributed Consensus, Things to Learn

A Few Streams to Learn From: Apache Kafka!

Here I am in the middle of QCon SF, about to enjoy the “Demystifying Stream Processing with Apache Kafka” talk with Neha Narkhede @nehanarkhede. The background on this talk is rooted in Neha being a co-founder of Confluent.io, with co-founder Jay Kreps of Kafka co-creation fame. Neha is providing a fundamental talk on the insight and usage of streams across distributed systems.

That second tweet was of the room before we had to move to the keynote space to make room for everybody that was interested in the topic! Holy snikies!

If you’d like to read some more information on Kafka and streaming, check out some of these posts.

I’m looking forward to digging into Kafka and its various uses in the coming weeks at my current job (more on that REAL soon, and yes I said job). I’ve got some heavy data to work with (big data just isn’t even descriptive, so I’m going with the Marty McFly terminology of “Heavy” and adding “Data” to form a more descriptive and unique term).  ;)

Visual Studio Code goes OSS & more Wicked F#!

As I’m sitting listening to Neha’s talk I see a stream (because I multi-task like a crazy person) of things getting mentioned about something something OSS and something something F# and something something Visual Studio Code. So even though we’re heavy into the middle of compaction, stream processing, discussions of queue points, and how to manage so many things Kafka using a library with the kstream DSL, processor API, and interfaces – all of which Neha is discussing, and it’s very interesting – I’m going back and forth between what Neha is talking about, taking notes on the specific topic points I’ll need to research after her talk, and reading up on these something something OSS, something something F#, something something Visual Studio Code tweets. Then I stumbled into the rabbit hole of goodies that I was seeing…

Visual Studio Code is OSS now w/ F# Goodies!


At least, that’s my first response, because this fixes my #1 complaint about Visual Studio Code. I hated that it wasn’t open source when there was very little reason for it to be closed source. So much of it was open already that it just seemed confusingly absurd that it wasn’t open source. But here it is, wide open and ready for PRs yo!

The Marketplace for Visual Studio now has a few new goodies for F# too, which is excellent!


Most of this is mentioned on the Visual Studio Code blog of course, but I’m outlining a few of the bits here, since I know not too many of the people who read my blog follow the VSC blog – for various good reasons!  ;)

With that, I leave you with the two key tidbits that worked their way into my brain while I enjoyed learning about Kafka in Neha’s talk. Cheers!



Docker Tips n’ Tricks for Devs – Issue 0003 – Getting RabbitMQ Running

There are a few clean ways to get started running RabbitMQ with Docker. The easiest is of course just to grab the Kitematic UI, pull the repo, and start it. However, here are the actual commands to get RabbitMQ started from the command line.

docker run -d -p 5672:5672 -p 15672:15672 dockerfile/rabbitmq

If you want persistent shared directories, then make sure to run the command like this.

docker run -d -p 5672:5672 -p 15672:15672 -v <log-dir>:/data/log -v <data-dir>:/data/mnesia dockerfile/rabbitmq

Now that the instance is up and running, you can get to work playing around and using RabbitMQ on your new docker container.
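
To confirm it’s actually up, list the running containers and check that both ports are mapped; the management UI should then answer at http://localhost:15672 (the management plugin’s default login is guest/guest, assuming the image hasn’t changed it).

docker ps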


After 816 Days I’m Taking a Job!

The new mission, or as some may call it, a job! The context, for those that might not be familiar with my adventures, is that I’ve been working independently as a consultant, contractor, community builder, beer drinker, hacker, teacher, trainer, mentor, curriculum builder, and training content creator. The last time I held something that resembled a job was 560 business days ago, or more specifically 816 days ago. Honestly, I’m not even sure it could be considered a job; it was a strange gig to say the least. Now, after this long break, I’m taking up a new position with some interesting objectives and priorities.

Here’s some of what I wrote to outline the specific objectives and priorities for the new team I’m joining and to ensure I had clear priorities for myself. I do, after all, rank clear objectives very high on the “things that are useful & cool” list.


  • Help launch and build the community around the release of a yet-to-be-announced open source micro-services framework (we’re currently calling it the Forge Framework) following an open source software model. I’ll also be telling you about all the work that has gone into this framework so far from Jesse, Beau, and the team. This will cover their various battles, from discussions to decisions, all leading up to the release of the framework. At this point, we’re aiming to release somewhere around February. Currently it is in production, but we will need to make sure we have a reasonable repository of code we can release. We’re aiming for it to be in good shape for everybody to use when it’s released. (I’ll be managing the overall effort, so ping me if you’re interested in jumping into the project.)
  • Help build infrastructure for site reliability, deployment, etc. (immutable, container-based, and so on) to deliver the company’s key products, APIs, and micro-services, and improve the back-end deployment and delivery options and capabilities. This is going to include a lot of cool technology, including things like Docker, Kafka, CoreCLR, and a host of other things that I’ll be blogging about on a regular basis. Along with this infrastructure and site reliability work I’ll help set guidelines, approaches, and future objectives for delivery and deployment of software. When I implement things, I’ll aim to blog it; when I learn new tips and tricks, I’ll aim to blog it; and whenever I break a build, I’ll blog that too. Whatever it is, I’m aiming to increase my frequency a great deal in the coming days, weeks, and months.
  • I’m looking for, scouting around like force recon for, and connecting talent to future work that will be coming open in early 2016. (Again, this is where I get to come and hack with you, help build awesome open source software, and let you, my fellow coding cohort, know about the company’s existing and upcoming awesome work we’ll be hiring for! For those that know me, you know I’m serious about making sure I line up the right people with the right types of gigs. I’m no recruiter, I’m a coder, so I fight against wildly inappropriate misalignment and related silliness!)

These are my top priorities as I step into this role with the Quote Center, a kind of laboratory of inventive ideas for Home Depot. You’ll be hearing a lot more about this in the coming days, and if you’re interested in working with me and an awesome group of people – reach out and let me know @Adron on Twitter or just email me. Cheers!

DevDay 2015, Inspiration, and a quick look back…

So far this year, which is obviously nowhere near finished yet, I have had some amazing experiences. From .NET Fringe, Polyglot 2015, Progressive .NET Tutorials 2015, to Dev Day 2015 and more. I decided to add a little bit more of a personal note in this blog entry because of inspiration I just got from Michał Śliwoń (@mihcall) on his Dev Day 2015 Aftermath write up.

Just as Michał writes,

“Inspiration is like a spark. It can be one brilliant presentation at the conference, one sentence at some session, one hallway conversation with another attendee and I’m excited, coming back with a head full of new ideas. Every conference has this little spark”

and I completely agree. At .NET Fringe I got back into a few things on the .NET CLR stack, namely F# and a little toying around with Akka .NET and micro-services using those technologies. I also had a hand in organizing the conference and in its origins, which I wrote about. At Polyglot 2015 my desire to become more familiar and comfortable with functional programming languages increased. At the Progressive .NET Tutorials I was again inspired to dive deeper into functional languages and to look more closely at everything from Weave to other container- and virtualization-based systems.

Thrashing Code News

One thing this led me to is putting together a list of people who are interested in these types of conferences. I’m talking about the really down-to-earth, nitty-gritty, get-into-the-weeds-of-the-technology conferences where you meet the people building and using that technology every day. You can sign up for the list here, and do read the article just below the sign-up page, as this is NOT some spam list. I’ll be putting in real effort and time to put together good content when the list officially kicks off! I will blog about, and of course get that first email out about, Thrashing Code News in the coming months.

At Dev Day I was again inspired by, and got to meet, many people. Which leads me to the number one thing that makes these conferences absolutely great: the people who attend.

The People

I got to meet Rob Conery (@robconery / http://rob.conery.io/). We hung out, had beers, talked shop, talked surfing, talked tech and training screencasts, discussed future bad ass conferences (again, sign up to my list and I’ll keep you abreast of any mischievous conference Rob & I dive into) and tons more. It was seriously kick ass to meet Rob, especially after not getting a chance to at what must have been a gazillion conferences he and I have both been at before!

I finally met Christian Heilmann (@codepo8), who must have also been at a gazillion of the same conferences I was, yet somehow we managed to not meet each other. Good conversation, talk of Seattle, and other devilish code-happy things – and hopefully a beer or two to be had with good Christian in the near future in Seattle or Portland (or thereabouts).

I had the fortune of running into Alena Dzenisenka (@lenadroid) again, doing what she does, which is tell, teach, and show people a whole lot of awesome F# handiwork. For instance, at Dev Day she was throwing down some machine learning math and helping to get people started. She’s also got some talks lined up near the Cascadian lands (that’s Seattle and Portland, but also San Francisco and Dallas!!) if you haven’t noticed, so come get inspired to sling some functional code!

On day one the keynote by Chad Fowler (@chadfowler) was excellent. I’d not realized he was a fellow who escaped the south like I had, all while playing a bunch of music! I was able to catch Chad and chat a bit on day two of the conference. His presentation was great, and he’s motivated me to give his book Passionate Programmer a read.

Another individual who I’d been aiming to meet, Mathias Brandewinder (@brandewinder), was also at the conference. I even attended some of his workshop and learned a number of things about F# and machine learning. I’m definitely inspired to dig deeper into the machine learning realm and start figuring out more of the truly amazing things we can do with computers and machine learning algorithms – I honestly feel like we’ve only skimmed the surface of much of this technology. Mathias also has a book that is truly worth buying, titled “Machine Learning Projects for .NET Developers“. If you’re curious, yes, I have the book and am working through it steadily!  :)

Gary Short (@garyshort) provided an amazing talk on digging into crop yields via the European Space Agency Data Science Project. I also enjoyed the multiple conversations that I was able to have with Gary from the talk of “really really really awesomely excited Americans” vs. “excited Americans” all the way to the talks on the matter of data science and crop yields themselves. Gary’s talk is linked below, so get a dose of the crop yields yourself, and any complaints be sure to send to his @robashton twitter account!  (But seriously, you should follow Rob Ashton too as he’s got a lot of good twitter nuggets).

Another person I was super stoked to run into again is Tomas Petricek (who I hear might be in the Cascadian lands of the Seattle area in a month or so). I met Tomas at Progressive .NET Tutorials in London and enjoyed a number of good conversations, and his general awesome personality and hilarious demeanor! Not sure I mentioned, but he’s got some wicked F# chops too. He spoke about Understanding the World with Type Providers, which is something that you should watch as it’s an interesting way to wrap one’s mind around a lot of ideas.

I also got to meet Krzysztof Cieślak (@k_cieslak), after many random conversations about a whole host of topics in the Functional Programming Slack (follow the link to sign up) chats. Krzysztof (and if you can’t pronounce his name just keep trying, you’ll get it right sometime around 2023) was great to meet and catch up with in person. It was also great to hear tidbits about what he’s working on, since he’s driving some really cool projects, including the Ionide Project for the Atom Editor.

There are so many people I enjoyed chatting with and getting to meet, and I really wish I had more time to hang out, chat, or hack with everybody. I met so many other individuals that I already feel like a prick for not being able to write something about every single awesome person I’ve had a chance to speak with at Dev Day and in the days after the conference. To those I didn’t get to, sorry about that; drinks and dinner are on me when you’re in Portland!

…on that note, get subscribed to Thrashing Code News so I can update you when the rumblings and dates of the next kick ass conferences, hackathons, hacking festivals, or other great materials and learnings come up. In addition, get inspired to speak or get involved in some way, and help make the next conference you attend as kick ass as you’d want it to be! It’s easy, just fill out your name and email here.

…and to Michał and Rafał, I’ll be following up with you guys on some of my next conference efforts coming up in the Cascadian Pacific Northwest (i.e. the Seattle/Portland area)! Cheers!