Researching & Learning About Zookeeper: A Guide

I’ve started working with Zookeeper recently, and along the way I’ve put together this blog post. Its aim is to provide a structured approach to learning Zookeeper and researching the elements that make its features tick. Along the way I have a few call outs to people that have provided excellent talks, material, or contributions to learning about Zookeeper. With that, let’s get started.

Zookeeper is a consensus-based coordination system, built on ideas from consensus algorithms. The idea is a key value store that keeps all of its data in memory, which suits read heavy workloads. In this context those qualities add up to a system that is highly consistent, intended to give distributed systems access to data that won’t be lost.

Start Learning

The starting point should be a complete read of the Apache Zookeeper Project Home Page.

At this point I took an administrator’s angle on determining what I needed to know and do next. I knew that my situation would meet the basic reliability assumptions around Zookeeper: first, that only a minority of servers in a deployment would fail or become inaccessible at any particular time due to a crash, partition, or related issue; and second, that the deployed machines would have correctly operating clocks, storage, and network components that perform consistently.

I had also made an assumption that I would need 2 x F + 1 machines in order to maintain data consistency, where F is the number of failed or inaccessible machines to tolerate. This meant that if I wanted to tolerate 2 failures, I’d need at least 5 machines. To tolerate a failure of up to 3 machines, that would be 7 machines. Pretty easy, just a little simple math.
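
Just to sanity check the math, here’s a quick sketch in bash; F is whatever failure count you want to tolerate.

# Minimum ensemble size to tolerate F simultaneous failures (the 2F + 1 rule).
F=2
echo "Ensemble size to tolerate $F failures: $(( 2 * F + 1 ))"   # prints 5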

The other thing I was curious about, especially on a single machine, was Zookeeper’s overall overhead. Would it come into contention with the services that are already running? Would it be ok to put Zookeeper on the machines that run the micro-services that Zookeeper is providing information to? Well, Zookeeper does indeed contend with other applications for CPU, network, memory, and storage. For this reason I have to balance the deployment of Zookeeper against the other applications; since my server loads may not be super high, I’d be able to put Zookeeper on some of the servers that already have other services deployed. But YMMV depending on the services you’ve got deployed.

While I was thinking through how I’d build out the architecture for my implementation of Zookeeper I came upon a very important note in the documentation:

“ZooKeeper’s transaction log must be on a dedicated device. (A dedicated partition is not enough.) ZooKeeper writes the log sequentially, without seeking. Sharing your log device with other processes can cause seeks and contention, which in turn can cause multi-second delays.

Do not put ZooKeeper in a situation that can cause a swap. In order for ZooKeeper to function with any sort of timeliness, it simply cannot be allowed to swap. Therefore, make certain that the maximum heap size given to ZooKeeper is not bigger than the amount of real memory available to ZooKeeper. For more on this, see Things to Avoid below.”

After reading up on that documentation it seemed like a good time to do a test deployment.

BEGIN BUG DESCRIPTION: 1ST DOCKER ATTEMPT

NOTE: If you just want to get to the Zookeeper installation & setup and skip this issue, GOTO here.

My first go was to pull up a clean Ubuntu Docker image and prep it as a container, then start installing the necessary parts of Zookeeper. These steps consisted of the following. I made a video for it (see toward the bottom of this entry) so you can actually see the flow, and I’ve also written out the commands I’m typing in bash below. Pick whichever you prefer.

docker-machine start fusion-fire

Docker Machine starts my virtual machine on OS-X that runs the Docker daemon, which I’ve named fusion-fire, thus the command above. After that I pulled down an Ubuntu image, started a container from the image, and connected to the container, all set for installation.

docker pull ubuntu
docker run -it ubuntu

To install the Zookeeper server and begin execution I then issued the following.

sudo apt-get update
sudo apt-get -y install default-jdk

While this was executing I also ran into a situation where the Java Development Kit was hanging on getting the certificates put into place.

I began looking into this problem and found that currently on Ubuntu 14.04, running sudo apt-get update and then running the install will trigger the bug. There are a couple of other points of reference out there, along with other postings and issues related to the problem; just google it. So what I did at that point to work around the issue was the following.

First I forcefully killed the docker container by just restarting the whole docker VM.

docker-machine stop fusion-fire
docker-machine start fusion-fire

Once the virtual machine was back up I connected to the container again, this time trying just the JRE instead of the full JDK.

sudo apt-get -y install default-jre

Apt-get attempted to recover, but the install kept getting stuck on registering the certificates. So I gave up on this avenue for now; hopefully a future Docker & Linux kernel release fixes the problem. Instead I went out and just spooled up some AWS instances. I’ll update this blog entry with a “Part II: Docker is Zookeeper Fixed” when the Java + Linux kernel + Docker issue is remedied. Until then, here’s the installation process on the AWS instances.

END BUG DESCRIPTION

AWS Instance Zookeeper Installation

Once this was set up I started 5 nano instances for Zookeeper (nano, since it’s just a test example for learning) and then logged into all of them using broadcast input with iTerm 2. From there each instance had the following commands executed.

sudo apt-get update
sudo apt-get install -y default-jdk
cd /opt/
sudo mkdir zookeeper
cd zookeeper/
sudo wget http://mirror.tcpdiag.net/apache/zookeeper/zookeeper-3.4.6/zookeeper-3.4.6.tar.gz
sudo tar -zxvf zookeeper-3.4.6.tar.gz
cd zookeeper-3.4.6/conf/
sudo nano zoo.cfg

NOTE: Nano is the text editor I used above for “sudo nano zoo.cfg”. If you don’t have it available just install it with “sudo apt-get install nano”.

In that zoo.cfg I entered the following. For the IPs I actually used the AWS private IPs for the config file example below.

tickTime=2000
dataDir=/var/zookeeper
clientPort=2181
initLimit=5
syncLimit=2
server.1=172.31.19.66:2888:3888
server.2=172.31.19.67:2888:3888
server.3=172.31.19.68:2888:3888
server.4=172.31.19.69:2888:3888
server.5=172.31.19.70:2888:3888
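
Given the dedicated-device and swap warnings quoted earlier, here’s a hedged sketch of two extra knobs I’d layer on top of this config. The /zk-txlog mount point and the 1 GB heap are placeholder values of mine, not recommendations from the docs, and conf/java.env is where I believe zkServer.sh picks up JVM flags in 3.4.x.

# Hedged sketch: separate transaction log device plus a heap cap (placeholder values).

# In /opt/zookeeper/zookeeper-3.4.6/conf/zoo.cfg, point the transaction log at a
# mount backed by its own dedicated disk:
#   dataLogDir=/zk-txlog

# Cap the JVM heap so the process never swaps; zkServer.sh should pick this up
# from conf/java.env if that file exists:
echo 'export JVMFLAGS="-Xms1g -Xmx1g"' | sudo tee /opt/zookeeper/zookeeper-3.4.6/conf/java.env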

Now I started the service using the zkServer.sh script file.

sudo /opt/zookeeper/zookeeper-3.4.6/bin/zkServer.sh start-foreground

When it booted up I ran into an error about the myid file, so I added the file with a sequential number for the myid in the /var/zookeeper directory.

sudo nano /var/zookeeper/myid

In each of the files I added a number, respectively 1 through 5 for the id of each, and saved those files. Upon starting the Zookeeper service again on each node I finally got to see the various nodes in the ensemble gain access to each other and start working. Which, I gotta admit, was a pretty damn cool feeling.
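
For reference, roughly what that looked like on the first node; a small sketch, and the number obviously has to match the server.N line in zoo.cfg on each host.

# On server.1 – repeat on each node with its own number, 1 through 5.
sudo mkdir -p /var/zookeeper
echo "1" | sudo tee /var/zookeeper/myid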

After all that fussing it seemed good to note some of the switches for zkServer.sh, especially since they’re hard to find in the documentation (which is kind of a bit hard to use):

start
start-foreground (super useful for debugging)
stop
restart
status
upgrade
print-cmd

Once this is done, restart the service, but this time instead of using the start-foreground command just use the start command; that will start the service and return bash back to you to issue commands or whatnot. An easy way to test out Zookeeper now that it is running is to use the Zookeeper CLI. This is the zkCli.sh shell script (or the zkcli.bat file if you’re running it on Windows – which I’d strongly suggest NOT doing).
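
A minimal smoke test with zkCli.sh looks something like the following; the /test znode here is just a throwaway example of mine.

sudo /opt/zookeeper/zookeeper-3.4.6/bin/zkCli.sh -server 127.0.0.1:2181

# Once the CLI prompt comes up:
#   ls /                    # list the root znodes
#   create /test "hello"    # create a throwaway znode
#   get /test               # read it back
#   delete /test            # clean up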

Ok, that’s it for this entry. More to come in the near future. Cheers!

Excellent Additional References

All That Tech… SITREP: Elastic Meetup & Quote Center Updates

I started working with the Quote Center (QC) back in November, and wrote about it in “After 816 Days I’m Taking a Job!” Now that I’m a few months into the effort, it’s sitrep time. Sitrep, btw, is military speak for Situational Report.

The three core priorities I have at Quote Center in my role are: Community Contributions, Site Reliability, and Talent Recon.

Community Contributions (& Organizing)

Some of the progress I’ve made is direct and immediate involvement with some really interesting groups here in Portland. The first seemed a prime option: the Elastic User Group.

Myself and some of the QC Team traveled late last year to check out the Elasticon Tour stop in Seattle. It was an educational experience where I got my first introductions to Elasticsearch and also a new product Elastic had just released called Beats. I was fairly impressed by what I saw, and several other things aligned perfectly for follow up community involvement after that.

I’ve since kept in touch with the Elastic Team and started coordinating the Elastic User Group in Portland (Join the group on Meetup for future meetings & content). In March the group will be hosting a great meetup from Ward & Jason…

Kafka, Java, Ruby, React, and Elasticsearch w/ Ward Cunningham and Jason Clark
Monday, Mar 28, 2016, 6:30 PM at the Lucky Lab, 915 SE Hawthorne Blvd., Portland, OR

New Relic receives tons of metrics. Large customers can report thousands of uniquely named metrics per minute, but they want to search and chart them in nearly realtime. We’ve turned to Elasticsearch on this problem, tuning it for this write-heavy workload. With small, frequently duplicated documents, it’s been an interesting challenge to optimize …

So be sure to RSVP for that meetup as it’s looking to be a really interesting presentation.

The second group I’ve stepped up to help out with is the Docker Meetup here in Portland. The first meetup we have planned at this time is from Casey West.

How Platforms Work
Wednesday, Mar 16, 2016, 6:30 PM at New Relic, 111 Southwest 5th Avenue #2800, Portland, OR

Platforms: either you have one or you’re building one. Over the years I’ve observed six high-level characteristics common to production environments which are operationally mature. This talk will explain in detail the six capabilities in an operationally mature production environment. I will also demo these capabilities live. Working in Internet i…

Site Reliability

One of the other priorities I’ve been focusing on is standard site reliability: everything from automation to continuous integration and deployment. I’ve been making progress, albeit slowly; at this stage, going from zero to something in the space of a site reliability practice takes time. I’ve achieved a few good milestones however, which will help build upon the next steps of the progress.

We’ve started to slowly streamline and change our practice around Rackspace and AWS Usage. This is a very good thing as we move toward a faster paced continuous integration process around our various projects. At this time it’s a wide mixture of .NET Solutions that we’re moving toward .NET Core. At the same time there are some Node.js and other project stacks that we’re adding to our build server process.

Team City

Our build server at this time is shaping up to be TeamCity. We have some build processes that are running in Jenkins, but those are being moved off and onto a TeamCity server for a number of reasons. I’m going to outline those reasons, and I’m happy to hear arguments for other, better options. So feel free to throw a tweet at me or leave a comment or three.

  1. Jetbrains has a pretty solid and reliable product in Team City. It tends to be cohesive in building the core types of applications that we would prospectively have: Java, .NET, Node.js, C/C++ and a few others. That makes it easy to get all projects onto one build server type.
  2. TeamCity has intelligence about what is and isn’t available for Java & .NET, enabling various package management and other capabilities without extensive scripting or extra coding needed. There are numerous plugins to help with these capabilities also.
  3. TeamCity has fairly solid, quick, and informative support.

Those are my top reasons at this point. There’s another reason, which isn’t really something I felt should be enumerated because it’s a feeling versus something I’ve confirmed: the Jenkins community honestly feels a bit haphazard and disconnected. Maybe I’m just not asking or haven’t seen the right forums to read or something, but I’ve found it a frustrating experience to deal with the Jenkins server and find information and help on getting a disparate and wide ranging set of tech stacks building on it. TeamCity has always just been easy, and getting some continuous integration going the easiest way possible is very appealing.

Monitoring

We use a number of resources for monitoring of our systems. New Relic is one of them, and they’re great, however it’s a bit tough when things are locked down inside of a closed (physically closed) network. How does one monitor those systems and the respective network? Well, you get Nagios or something of the sort installed and running.

I installed it, but Nagios left me with another one of those dirty feelings like I just spilled a bunch of sour milk everywhere. I went about cleaning up the Nagios mess I’d made and, upon attending the aforementioned Elasticon Tour Stop in Seattle, decided to give Beats a try. After a solid couple weeks of testing and confirming the various things work well and would work well for our specific needs, I went about deploying Beats among our systems.

So far, albeit only being a few weeks into using Beats (and still learning how to actually make reports in Kibana), Beats appears to have been a good decision. It’s dramatically more cohesive and not spastically splintered all over the place like Nagios. I’m already looking into adding additional Beats beyond the known three: Topbeat, Packetbeat, and Filebeat. There are a number of other beats specific to our needs that we could add, which would make good open source projects. Stay tuned for those; I’ll talk about them in this space and get a release out as soon as we lay down a single line of code for them.

Talent Recon

Currently, nothing to report, but more to come in the space of talent recon.

Elasticon Tour 2015 in Seattle

Today is the Seattle stop of the Elasticon Tour. Myself and some of the Home Depot Quote Center team headed up via the Geek Train for the event. We arrived the night before so we could get up and actually be awake and ready for the event.

Just to note, a good clean place to stay, that isn’t overpriced like most of Seattle is the Pioneer Square Hotel – usually about $110-120 bucks. If you’re in town for a conference, sometimes it’s even worth skipping the “preferred hotels” and staying here. But I digress…

When the team and I walked in we waited a little bit for registration to get started. We stood around and chatted with some of our other cohort. Once the registration did open, we strolled into the main public space and started checking out some demos.

StreamSets

The first thing I noticed among the demos is something that’s catching a lot of attention. It’s a partner of Elastic’s called StreamSets.

From what I could figure out just watching the demo, StreamSets is an ingest engine. That’s simple enough to determine just taking a look at their site. But being able to watch the demo also enlightened me to the way the interface IDE (the thing in the dark pictures above) worked.

The IDE provided ways to connect to ingestion data with minimal schema and actually start to flow the ingestion of this data through the engine. One of the key things that caught my attention at this point was the tie in with Kafka and Hadoop with the respective ingest and egress of data to sources ranging from AWS S3 to things like Elastic’s engine or other various sources that I’ll be working with in the coming months.

For more information about StreamSets there are a few other solid articles out there, along with their community channels to keep up with what StreamSets is doing, the install instructions, and most importantly, the code.

Beats (Not the Dumb Lousy Headphones)

Recently I installed Nagios, as I will be doing a lot of systems monitoring, management, and general devops style work in the coming weeks to build out solid site reliability. Nagios will theoretically do a lot of the things I need it to do, but then I stumbled into the recently released Beats by Elastic (not by Dre, see the links in the title above).

I won’t even try to explain Beats, because it is super straight forward. I do suggest checking out the site if you’re even slightly interested, but if you just want the quick lowdown, here’s a quote that basically summarizes the tool.

“Beats is the platform for building lightweight, open source data shippers for many types of operational data you want to enrich with Logstash, search and analyze in Elasticsearch, and visualize in Kibana. Whether you’re interested in log files, infrastructure metrics, network packets, or any other type of data, Beats serves as the foundation for keeping a beat on your data.”

So there ya go, something that collects a ton – if not almost all of – the data that I need to manage and monitor the infrastructure, platforms, network, and more that I’m responsible for. I’m currently diving in, but here are a few key good bits about Beats that I’m excited to check out.

#1 – PacketBeat

This is the realtime network packet analyzer that integrates with Elasticsearch and provides the respective analytics you’d expect. It gives a level of visibility between all the network servers and such that will prospectively give me insight into where our series of tubes are getting clogged up. I’m looking forward to seeing our requests mapped up with our responses!  ;)

#2 – FileBeat

This is a log data shipper based on the Logstash-Forwarder. At least it was at one point; it appears to be less and less based on it over time. This beat monitors log directories for log files, tails the files, and forwards them to Logstash. This completes another important part of what I need to systemically monitor within our systems.
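
Just to give a feel for that tail-and-forward flow, here’s roughly the shape of a Filebeat config as I understand it so far; the log path and Logstash host are placeholders of mine, and the exact key names may shift between Beats releases.

# Hedged sketch of a minimal filebeat.yml; adjust paths and hosts for real use.
cat > filebeat.yml <<'EOF'
filebeat:
  prospectors:
    -
      paths:
        - /var/log/*.log
      input_type: log
output:
  logstash:
    hosts: ["logstash.example.internal:5044"]
EOF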

Random fascinating observations:

  • Did I mention Beats is written in Go? Furthering Derek’s tweet from 2012!  ;)

  • Beats has a cool logo, and the design of the tooling is actually solid, as if someone cared about how one would interact with the tools. I’ll see how this holds up as I implement a sample implementation of things with Beats & the various data collectors.

More References & Reading Material for Beats:

That’s it for the highlights so far. If anything else catches my eye this evening at the Elasticon Tour, I’ll get started rambling about it too!

Nagios and Ubuntu 64-bit 14.04 LTS Setup & Configuration

1st – The Virtual Machine

First I created a virtual machine for use with VMware Fusion on OS-X. Once I got a nice clean Ubuntu 14.04 image setup I installed SSH on it so I could manage it as if it were a headless (i.e. no monitor attached) machine (instructions).

In addition to installing openssh, those steps also include build-essential, make, and gcc, along with instructions for VMware Tools – but don’t worry about installing VMware Tools. Those instructions are cumbersome and in parts just wrong, so skip that. The virtual machine is up and running with ssh and a good C compiler at this point, so we’re all set.

2nd – The LAMP Stack

sudo apt-get install apache2

Once installed, the default page will be available on the server, so navigate over to 192.168.x.x and view the page to ensure it is up and running.
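
A quick way to check the same thing from a terminal instead of a browser (substitute the VM’s actual address):

# Should come back with an HTTP 200 if Apache is serving the default page.
curl -I http://192.168.x.x/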


Next install mysql-server and php5-mysql.

sudo apt-get install mysql-server php5-mysql

During this installation you will be prompted for the mysql root account password. It is advisable to set one.
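
The prompts described next come from MySQL’s post-install scripts, which this walkthrough assumes get run; a hedged sketch of the commands I believe belong at this step:

# Initialize the data directory and run the interactive hardening script.
sudo mysql_install_db
sudo mysql_secure_installation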

Then you will be asked to enter the password (the one you just set about 2 seconds ago) for the MySQL root account. Next, it will ask you if you want to change that password. Select ‘n’ so as not to create another password for the root account, since you’ve already created the password just a few seconds before.

For the rest of the questions, you should simply hit the enter key for each prompt. This will accept the default values. This will remove some sample users and databases, disable remote root logins, and load these new rules so that MySQL immediately respects the changes we have made.

Next up is to install PHP. No grumbling, just install PHP.

sudo apt-get install php5 libapache2-mod-php5 php5-mcrypt

Next let’s open up dir.conf and change a small section to change what files apache will provide upon request. Here’s what the edit should look like.

Open up the file to edit. (in vi, to insert or edit, hit the ‘i’ button. To save hit escape and ‘:w’ and to exit vi after saving it escape and then ‘:q’. To force exit without saving hit escape and ‘:q!’)

sudo vi /etc/apache2/mods-enabled/dir.conf

This is what the file will likely look like once opened.

<IfModule mod_dir.c>
DirectoryIndex index.html index.cgi index.pl index.php index.xhtml index.htm
</IfModule>

Move the index.php file to the beginning of the DirectoryIndex list.

<IfModule mod_dir.c>
DirectoryIndex index.php index.html index.cgi index.pl index.xhtml index.htm
</IfModule>

Now restart apache so the changes will take effect.

sudo service apache2 restart

Next let’s setup some public key for authentication. On your local box complete the following.

ssh-keygen

If you don’t enter a passphrase, you will be able to use the private key for auth without entering a passphrase. If you’ve entered one, you’ll need it and the private key to log in. Securing your keys with passphrases is more secure, but either way the system is more secure this way than with basic password authentication. For this particular situation, I’m skipping the passphrase.

What is generated is id_rsa, the private key, and id_rsa.pub, the public key. They’re put in a directory called .ssh under the local user’s home directory.
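
If you’d rather script that choice (no passphrase) instead of answering the prompts, something like this works; the key size and comment are just example values of mine.

# Generate a 4096-bit RSA key pair with an empty passphrase, non-interactively.
ssh-keygen -t rsa -b 4096 -N "" -f ~/.ssh/id_rsa -C "workstation key"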

At this point copy the public key to the remote server. On OS-X grab the easy to use ssh-copy-id script with this command.

brew install ssh-copy-id

or

curl -L https://raw.githubusercontent.com/beautifulcode/ssh-copy-id-for-OSX/master/install.sh | sh

Then use the script to copy the ssh key to the server.

ssh-copy-id adron@192.168.x.x

That should give you the ability to log into the machine without a password every time. Give it a try.

Ok, so now on to the meat of this entry, Nagios itself.

Nagios Installation

Create a user and group that will be used to run the Nagios process.

sudo useradd nagios
sudo groupadd nagcmd
sudo usermod -a -G nagcmd nagios

Install these other essentials.

sudo apt-get install libgd2-xpm-dev openssl libssl-dev xinetd apache2-utils unzip

Download the source and extract it, then change into the directory.

curl -L -O https://assets.nagios.com/downloads/nagioscore/releases/nagios-4.1.1.tar.gz
tar xvf nagios-*.tar.gz
cd nagios-*

Next run the command to configure Nagios with the appropriate user and group.

./configure --with-nagios-group=nagios --with-command-group=nagcmd

When the configuration is done you’ll see a display like this.

Creating sample config files in sample-config/ ...

*** Configuration summary for nagios 4.1.1 08-19-2015 ***:

General Options:
-------------------------
Nagios executable: nagios
Nagios user/group: nagios,nagios
Command user/group: nagios,nagcmd
Event Broker: yes
Install ${prefix}: /usr/local/nagios
Install ${includedir}: /usr/local/nagios/include/nagios
Lock file: ${prefix}/var/nagios.lock
Check result directory: ${prefix}/var/spool/checkresults
Init directory: /etc/init.d
Apache conf.d directory: /etc/httpd/conf.d
Mail program: /bin/mail
Host OS: linux-gnu
IOBroker Method: epoll

Web Interface Options:
------------------------
HTML URL: http://localhost/nagios/
CGI URL: http://localhost/nagios/cgi-bin/
Traceroute (used by WAP):

Review the options above for accuracy. If they look okay,
type 'make all' to compile the main program and CGIs.

Now run the following make commands. First run make all as shown.

make all

Once that runs the following will be displayed upon success. I’ve included it here as there are a few useful commands in it.

*** Compile finished ***

If the main program and CGIs compiled without any errors, you
can continue with installing Nagios as follows (type 'make'
without any arguments for a list of all possible options):

make install
- This installs the main program, CGIs, and HTML files

make install-init
- This installs the init script in /etc/init.d

make install-commandmode
- This installs and configures permissions on the
directory for holding the external command file

make install-config
- This installs *SAMPLE* config files in /usr/local/nagios/etc
You'll have to modify these sample files before you can
use Nagios. Read the HTML documentation for more info
on doing this. Pay particular attention to the docs on
object configuration files, as they determine what/how
things get monitored!

make install-webconf
- This installs the Apache config file for the Nagios
web interface

make install-exfoliation
- This installs the Exfoliation theme for the Nagios
web interface

make install-classicui
- This installs the classic theme for the Nagios
web interface

*** Support Notes *******************************************

If you have questions about configuring or running Nagios,
please make sure that you:

- Look at the sample config files
- Read the documentation on the Nagios Library at:
https://library.nagios.com

before you post a question to one of the mailing lists.
Also make sure to include pertinent information that could
help others help you. This might include:

- What version of Nagios you are using
- What version of the plugins you are using
- Relevant snippets from your config files
- Relevant error messages from the Nagios log file

For more information on obtaining support for Nagios, visit:

https://support.nagios.com

*************************************************************

Enjoy.

After that successfully finishes, then execute the following.

sudo make install
sudo make install-commandmode
sudo make install-init
sudo make install-config
sudo /usr/bin/install -c -m 644 sample-config/httpd.conf /etc/apache2/sites-available/nagios.conf

Now some tinkering to add the web server user, www-data, to the nagcmd group.

sudo usermod -G nagcmd www-data

Now some Nagios plugins. You can find the plugins listed for download here: http://nagios-plugins.org/download/ The following are based on the 2.1.1 release of plugins.

Change back out to the user directory on the server and download, tar, and change into the newly unzipped files.

cd ~
curl -L -O http://nagios-plugins.org/download/nagios-plugins-2.1.1.tar.gz
tar xvf nagios-plugins-*.tar.gz
cd nagios-plugins-*
./configure --with-nagios-user=nagios --with-nagios-group=nagios --with-openssl

Now for some ole compilation magic.

make
sudo make install

Now pretty much the same thing for NRPE. Look here to ensure that 2.15 is the latest version.

cd ~
curl -L -O http://downloads.sourceforge.net/project/nagios/nrpe-2.x/nrpe-2.15/nrpe-2.15.tar.gz
tar xvf nrpe-*.tar.gz
cd nrpe-*

Then configure the NRPE bits.

./configure --enable-command-args --with-nagios-user=nagios --with-nagios-group=nagios --with-ssl=/usr/bin/openssl --with-ssl-lib=/usr/lib/x86_64-linux-gnu

Then get to making it all.

make all
sudo make install
sudo make install-xinetd
sudo make install-daemon-config

Then a little file editing.

sudo vi /etc/xinetd.d/nrpe

Edit the file for the line only_from to include the following where 192.x.x.x is the IP of the Nagios Server.

only_from = 127.0.0.1 192.x.x.x

Save the file, and restart the Nagios server service.

sudo service xinetd restart

Now begins the Nagios Server configuration. Edit the Nagios configuration file.

sudo vi /usr/local/nagios/etc/nagios.cfg

Find this line and uncomment the line.

#cfg_dir=/usr/local/nagios/etc/servers

Save it and exit.

Next create the configuration directory for the servers to monitor.

sudo mkdir /usr/local/nagios/etc/servers
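
To give an idea of what will eventually live in that directory, here’s a hedged sketch of a minimal host definition; web-01 and the address are placeholders of mine, and linux-server is the template name I believe ships in the sample configs.

# Hedged example: drop a minimal host definition into the servers directory.
sudo tee /usr/local/nagios/etc/servers/web-01.cfg > /dev/null <<'EOF'
define host {
        use                     linux-server
        host_name               web-01
        address                 192.x.x.x
        max_check_attempts      5
}
EOF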

Next configure the contacts config file.

sudo vi /usr/local/nagios/etc/objects/contacts.cfg

Find this line and set the email address to one you’ll be using.

email adronsemail@compositecode.com

Now add a Nagios service definition for the check_nrpe command.

sudo vi /usr/local/nagios/etc/objects/commands.cfg

Add this to the end of the file.

define command {
        command_name    check_nrpe
        command_line    $USER1$/check_nrpe -H $HOSTADDRESS$ -c $ARG1$
}

Save and exit the file.
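
With check_nrpe defined, a service that uses it looks roughly like this; again a hedged sketch, with web-01 matching the placeholder host above and check_load assumed to be defined in the remote host’s nrpe.cfg.

# Hedged example: append an NRPE-backed service check to the placeholder host file.
sudo tee -a /usr/local/nagios/etc/servers/web-01.cfg > /dev/null <<'EOF'
define service {
        use                     generic-service
        host_name               web-01
        service_description     Load Average
        check_command           check_nrpe!check_load
}
EOF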

Now a few last touches for configuration in Apache. We’ll want the Apache rewrite and cgi modules enabled.

sudo a2enmod rewrite
sudo a2enmod cgi

Now create an admin user, we’ll call them ‘nagiosadmin’.

sudo htpasswd -c /usr/local/nagios/etc/htpasswd.users nagiosadmin

Create a symbolic link of nagios.conf to the sites-enabled directory and then start the Nagios server and restart apache2.

sudo ln -s /etc/apache2/sites-available/nagios.conf /etc/apache2/sites-enabled/
sudo service nagios start
sudo service apache2 restart

Enable Nagios to start on server boot (because, ya know, that’s what this server is going to be used for).

sudo ln -s /etc/init.d/nagios /etc/rcS.d/S99nagios

Now navigate to the server and you’ll be prompted to login to the web user interface.


Now begins the process of setting up servers you want to monitor… stay tuned, more to come.

Kafka/Visual Studio Code/OSS… Distributed Consensus, Things to Learn

A Few Streams to Learn From: Apache Kafka!

Here I am in the middle of QCon SF, about to enjoy the “Demystifying Stream Processing with Apache Kafka” talk with Neha Narkhede @nehanarkhede. The background on this talk is rooted in Neha being a co-founder of Confluent.io, with co-founder Jay Kreps of Kafka co-creation fame. Neha is providing a fundamental talk on the insight and usage of streams across distributed systems.

That second tweet was of the room before we had to move to the keynote space to make room for everybody that was interested in the topic! Holy snikies!

If you’d like to read some more information on Kafka and streaming, check out some of these posts.

I’m looking forward to digging into Kafka and various uses in the coming weeks at my current job (more on that REAL soon, and yes I said job). I’ve got some heavy data (big data just isn’t even descriptive, so I’m going with the Marty McFly terminology of “Heavy” and adding “Data” to form a more descriptive and unique term).  ;)

Visual Studio Code goes OSS & more Wicked F#!

As I’m sitting listening to Neha’s talk I see a stream (because I multi-task like a crazy person) of things getting mentioned about something something OSS and something something F# and something something Visual Studio Code. We’re heavy into the middle of compaction, stream processing, discussions of queue points, and how to manage so many things Kafka using a library with the kstream DSL, processor API, and interfaces that Neha is discussing. It’s very interesting, so…

I’m going back and forth between what Neha is talking about, taking notes on the specific topic points I’ll need to research after her talk, and reading up on these something something OSS, something something F#, something something Visual Studio Code tweets. Then I stumbled into the rabbit hole of goodies that I was seeing…

Visual Studio Code is OSS now w/ F# Goodies!

HELL YEAH!

At least, that’s my first response because this fixes my #1 complaint about Visual Studio Code. I hated that it wasn’t open source, when there was very little reason for it to be closed source. So much of it was open already, it just seemed confusingly absurd that it wasn’t open source. But here it is, wide open and ready for PRs yo!

The Marketplace for Visual Studio now has a few new goodies for F# too which is excellent!

Ionide is one of those goodies.

Most of this is mentioned on the Visual Studio Code blog of course, but I’m outlining a few of the bits here, since I know not too many follow the VSC blog that read my blog – for various good reasons!  ;)

With that, I leave you with the two key tidbits that worked their way into my brain while I enjoyed learning about Kafka in Neha’s talk. Cheers!

 

 

A Small Rant About Being IDE-Dependent

{Sort of a Rant}

I recently saw this tweet.

I responded with this tweet.

Now I need to describe some context around this response real quick. Here are the key points behind this tweet from my context in the software development industry.

  1. I’m not a .NET Developer. I’m a software developer, or application programmer, coder, hacker, or more accurately a solutions developer. I don’t tie myself to one stack. I’ve developed with C# & VB.NET with Visual Studio. I’ve done C++ and VB 4. I’ve done VBA and all sorts of other things in the Microsoft space on Windows and for the web with Windows technology. About 5 years ago I completely dropped that operating system dependency and have been free of it for those 5 years. I doubt I ever want to couple an application up to that monstrous operating system again; it is far too limiting and has no significant selling point anymore.
  2. In addition to the .NET code I’ve written over the years I’ve also built a few dozen Erlang applications (wish I’d known about Elixir at the time), built a ton of Node.js based API and web applications, and have even done some Ruby on Rails and Sinatra. These languages and tooling stacks really introduced me to the reality I longed for. A clean, fast, understandable reality. A reality with configuration files that are readable and convention that gives me real power to get things done and move on to the next business need. These stacks gave me the ability to focus more on code, business, and research value and less on building the tool stack up just to get one single deployment out the door. Since then I’ve not looked back, because the heavy handed stacks of legacy Java, .NET, and related things have only weighed heavily on small businesses and startups trying to add business value. In the lands I live in, that is San Francisco to Vancouver BC and the cities in between, Java and .NET can generally take a hike. Businesses have things to do, namely make money or commit to getting research done, not piddling around building a tool stack up.

So how was this new reality created? It isn’t really new; what was new was the heavy handed IDE universe of fat editors that baby a developer through every step. They create laziness in thinking and do NOT encourage a lean, efficient, simple way to get an application built and running. It doesn’t have to be this way. A small amount of developer discipline, or dare I say craftsmanship, and most of these fat IDEs and tightly coupled projects with massive XML files for configuration in their unreadable catastrophofuck of spaghetti can just go away.

Atom, Sublime, and other editors before that harnessed this clean, lean, and fast approach to software development. A Node.js project only needs one file, and one command to create it.

npm init

A Ruby on Rails Project is about the same.

rails new path/to/your/new/application

That’s it, and BOOM you’ve got an application project base to start from.

Beyond that however, using a tool like Yeoman opens up a very Unix style way of doing something. Yeoman has a purpose, to build scaffolding for applications. That is its job and it does that job well. It has a pluggable architecture to enable you to build all sorts of scaffold packaging that you want. Hell, you can even create projects with the bloated and unreadable XML blight that pervades many project types out there from enterprisey platforms.

Take a look at Yeoman, it is well worth a look. This is a tool that allows us to keep our tooling, what we do use, loosely coupled so we don’t have a massive bloated (5+ GB) installation of nonsense to put on our machines. Anybody on any platform can load up Yeoman, grab an editor like Atom, Sublime, or Visual Studio Code (not to be confused with the bloated Visual Studio), and just start coding!

Take a look at some of the generators that yeoman will build projects for you with -> http://yeoman.io/generators/ There are over 1500! No more need to tightly couple this into an IDE. Just provide a yeoman plugin in your editor of choice. Boom, all the features you need without the tight coupling in editor! Win, win, win, win, and win!
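
As a concrete sketch of that flow, generator-webapp is just one example pulled from that generator list:

# Install Yeoman plus one example generator, then scaffold a project from the CLI.
npm install -g yo generator-webapp
mkdir my-app && cd my-app
yo webapp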

As a bit more of my rant about bloated editors, and don’t get me wrong, I love some of the features of the largesse of some like Visual Studio and WebStorm. But one huge rant I have is the absolutely zero motivation or vested interest that a community has to add features, bug fix, or do anything in relation to editors like Visual Studio or WebStorm. ZERO reason to help, only to file complaints or maybe file a bug complaint. That’s cool, I’m glad that the editors exist but this model isn’t the future. It’s slowly dying and having tightly coupled, slow, massively bloated editors isn’t going to be sustainable. They don’t adapt to new stacks, languages, or even existing ones.

Meanwhile, Atom, Sublime, and even Visual Studio Code and other lean editors have plugin and various adapter patterns that allow one to add support for new languages, stacks, frameworks, and related tooling without tight coupling to that thing. This gives these editors the ability to add features and capabilities for stacks at a dramatically faster rate than the traditional fat IDEs.

This is the same idea toward tooling that is used in patterns a software developer ought to know. Think of patterns like separation of concerns and loose coupling; don’t tie yourself into an untenable and frustrating toolchain. Keep it loosely coupled and I promise, your life will get easier and that evening beer will come a little bit sooner.

{Sort of a Rant::End}

…and this is why I say, do NOT tie project creation into an editor. Instead keep things loosely coupled and let’s move into the future.

The Question of Docker, The Future of OS Virtualization

In this article I’m going to take a look at Docker and OS virtualization, each on its own terms. There’s a reason, which will unfold as I dig through some data and provide this look into what is and isn’t happening in the virtualization space.

It’s important to also note what methods were used to attain the information provided in this article. I have obtained information through speaking with Docker employees and key executives, including Ben Golub and founder Solomon Hykes, over the years since the founding of Docker (and its previous incarnation dotCloud, before the pivot and name change to Docker).

Beyond communicating directly with the Docker team and gaining insight from them I have also done a number of interviews over the course of 4 days. These interviews have followed a fairly standard set of questions and conversation about the Docker technology, including but not limited to the following questions.

  • What is your current use of Docker virtualization technologies?
  • What is your future intended use of Docker technologies?
  • What is the general current configuration and setup of your development team(s) and the tooling that they use (i.e. stack: .NET, Java, Python, Node.js, etc.)?
  • Do you find it helps you to move forward faster than without?

The History of OS-Level Virtualization

First, let’s take a look at where virtualization has been, then I’ll dive into where it is now, and then I’ll take a look at where it appears to be going in the future and derive some information from the interviews and discussions that I’ve had with various teams over the last 4 days.

The Short of It

OS-level virtualization is a virtualization approach that allows the installation of software in a complete file system, just like a hypervisor based virtualization server, but with dramatically faster installation and prospectively better speed overall, by using the host OS. This cuts down on excess redundancies within the core system and the respective virtual clients on the host.

Virtualization as a concept has been around since the 1960s, with IBM being heavily involved at the Cambridge Scientific Center. Over time developments continued, but the real breakthrough in pushing virtualization into the market was VMware in 1999 with their virtual platform. This hypervisor level virtualization grew into a huge industry with the help of VMware.

However OS-level virtualization, which is what Docker is based on, didn’t take off immediately when introduced. There were many product options that came out over time around OS-level virtualization, but nothing made a splash in the industry similar to what Docker has. Fast forward to today: Docker was released in 2013 to ever increasing developer demand and usage.

Timeline of Virtualization Development

Docker really brought OS-level virtualization to the developer community at the right time in regards to demands around web development and new ways to implement effective continuous delivery of applications. Docker has been one of the most extensively used OS-level virtualization tools to implement immutable infrastructure, continuous build, integration, and deployment environments, and to use as a general virtual environment to spool up resources as needed for development.

Where we Are With Virtualization

Currently Docker holds a pretty dominant position in the OS-level virtualization market space. Let’s take a quick review of their community statistics and involvement from just a few days ago.

The Stats: Docker on Github -> https://github.com/docker/docker

Watchers: 2017
Starred: 22941
Forks: 5617

16,472 Commits
3 Branches
102 Releases
983 Contributors

Just from that data we can ascertain that the Docker community is active. We can also take a deep look into the forks and determine pull request activity, acceptance rates, and related data to find out that the overall codebase is healthy, with real involvement. This is good to know, since at one point there were questions about whether Docker had the capability to manage the open source legions pushing the product forward while maintaining the integrity, reputation, and quality of the product.

Now let’s take a look at what that position is based on, considering the interviews I’ve had in the last 4 days. Out of the 17 people I spoke with, all knew what Docker is. That’s a great position to be in compared to just a few years ago.

Out of the 17 people I spoke with, 15 are working on teams that have implemented Docker, are implementing it, or are in some state between having and implementing it in their respective environments.

Of the 17, only 13 said they were concerned in some significant way about Docker Security. All of these individuals were working on teams attempting to figure out a way to use Docker in a production way, instead of only in development or related uses.

The list of uses that the 17 have for Docker, or want to use it for, varies as much as the individual work that each is currently doing. There are however some core similarities in what they’re working on where Docker comes into play.

The most common similarity among Docker uses is simply as a platform to build out development testing environments or test servers. This is routinely a database server or simple distributed database like Cassandra or Riak, that can be built immutably, then destroyed and recreated whenever it is needed again for test and development. Some of the build outs are done with Docker specifically to work up a mock distributed database environment for testing. Mind you, I’m probably hearing about and seeing this because of my past work with Basho and other distributed systems programmers, companies, and efforts around this type of technology. It’s still interesting and very telling none the less.

The second most common usage is for Docker to be used somewhere in the continuous delivery chain. The push to move the continuous integration and delivery process to a more immutable, repeatable, and reliable process has been a perfect marriage between Docker and these needs. The ability to spin up entire environments in a matter of seconds, destroy them on a whim, and create them again a matter of moments later has made continuous delivery more powerful and more possible than it has ever been.

Some of the less common, yet still key uses of Docker, that came up during the interviews included; in memory cache servers, network virtualization, and distributed systems.

Virtualization’s Future

Pathing

With the history covered, the core uses of Docker discussed, let’s put those on the table with the acquisitions. The acquisitions by Docker have provided some insight into the future direction of the company. The acquisitions so far include: Kitematic, SocketPlane, Koality, and Orchard.

From a high level strategic play, the path Docker is pushing forward into is a future of continued virtualization of, as the hipsters might say, “all the things”. Their purchases of Kitematic and SocketPlane will both help Docker expand past OS virtualization alone and push more toward systemic virtualization of network environments, with programmatic capabilities and more. These are capabilities that are needed to move past the legacy IT environments of yesteryear, which will open up more enterprise possibilities too.

To further their core use that exists today, Docker has purchased Koality. Koality provides parallelizable continuous integration, deployment, and related services. This enables Docker to provide more built out services around this very important use case.

The other acquisition was Orchard (orchardup.com). This is a startup that provides a Docker host in the cloud, instantly. It’s a similar purchase to the Koality one: it bulks up capabilities that Docker already had at some level. It also pushes them forward with two branches of capabilities: SaaS on the web and prospectively offering something behind the firewall, which the Koality acquisition might have some part to play in also.

Threat Vectors

Even though the pathways toward the future seem clear for Docker in many ways, in other ways they are dramatically less clear. For one, there are a number of competitive options in play now, gaining momentum, or on the horizon. One big threat is that Google’s lack of interest in Docker has led them to build competing tooling. If they push hard into the OS-level virtualization space they could become a substantial threat.

The other threat vector is the simple unknown of what could become a threat. Something like Mesos might explode in popularity and determine it doesn’t want to use Docker, and focus on another virtualization path. In the same sense, Mesos could commoditize Docker to a point that the value add at that level of virtualization doesn’t retain a business market value that would sustain Docker.

The invisible threat around this area right now is fairly large. There’s no greater way to see this than to just get into a conversation with some developers about Docker. In one sense they love what it allows them to do, but the laundry list of things they’d like would allow a disruptor to come in and steal the Docker thunder pretty easily. To put it simply, there isn’t a magical allegiance to Docker; developers will pick whatever helps them move the ball forward the fastest and easiest.

Another prospective threat is a massive purchase by a legacy software company like Oracle, Microsoft, or someone else. This could effectively destabilize the OSS aspects of the product and slow down development and progress, yet it could increase corporate adoption many times over what it is now. So this possibility is something that shouldn’t be ruled out.

Summary

Docker has two major threats: the direct competitor, and prospectively being leapfrogged by another level of virtualization. The other prospective threat is the acquisition of Docker itself, though that could mean a huge increase in enterprise penetration. On the path the company and technology are moving forward on, there will be continued growth in usage and capabilities. The growth will hold among the leading technology startups and companies of this kind, while the mid-size and larger corporate environments will continue to adopt and deploy at a slower pace.

A Question for You

I’ve put together what I’ve noticed, and I’d love to see things that you, dear reader, might notice about the Docker momentum machine. Do you see networking as a strength, other levels of virtualization, deployment of machines, integration or delivery, or some other part of this space as the way forward into the future? Let me know what your thoughts are on Twitter or whatever medium you feel like reaching out on. Of course, I’d also love to know if you think I’m wrong about anything I’ve written here.