Sunday, 28 April 2013

Going Cloudy Part 6 - Monitoring and Load Balancing

The Monitoring Project

When using the Traffic Manager, it needs an endpoint on your service to hit in order to determine whether or not the service is responding. This endpoint must be open (no authentication) and must be a path on your service. As you may have noticed, in the ServiceDefinition I instructed my Site, Web Service and REST Services to only respond on ports 80 and 443 to a specific host header, one which the Traffic Manager cannot provide as it will be accessing the instance directly, in other words via its address.

The simple way to solve this would be to make one of the services also respond on an endpoint without a host header and give it an unauthenticated ActionResult somewhere that the Traffic Manager could access.
Now I'm no security expert, but I do my best, and I didn't want any of my core services hosting an unauthenticated endpoint; I also wanted to make sure that they are only accessible by their public URLs. Therefore, I created a project that is just for monitoring the health of my services. Initially, this will just serve the Traffic Manager. In the long term, it's a convenient place to put any generic monitoring functionality.
In order to serve the Traffic Manager, it just has a GET action on the Home controller which does a quick database connectivity check and returns a 200 if everything is okay. Before go-live, this will be extended to check that all the main services are responding.
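The real project is an ASP.NET MVC controller, but the idea is small enough to sketch in Python for illustration; the in-memory SQLite connection here is just a stand-in for the real database connectivity check:

```python
import sqlite3


def check_database() -> bool:
    """Stand-in connectivity check; the real endpoint would open the
    service's actual database connection instead of an in-memory one."""
    try:
        conn = sqlite3.connect(":memory:")
        conn.execute("SELECT 1")
        conn.close()
        return True
    except sqlite3.Error:
        return False


def health_status() -> int:
    """The HTTP status the monitoring action should return:
    200 if everything is okay, 503 otherwise."""
    return 200 if check_database() else 503
```

The important property is that the endpoint does real work (a round trip to the database) rather than just returning 200 unconditionally, so a dead backend actually shows up as an unhealthy instance.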

Using the Azure Traffic Manager

As previously mentioned, I need to have service instances in both the EU and the US. I didn't want users to have to decide which one they went to by choosing a region-specific URL; that's just a bad user experience in my book.
The Azure Traffic Manager provides you with a load balancer that you can use between Cloud Services in the same or different regions. I am using it in the performance mode, which routes the user to the nearest (and presumably fastest) service to them.
The Traffic Manager uses the aforementioned Monitoring project to determine whether a service is unavailable. It regularly hits the health endpoint and, if it does not receive a 200 within 5 seconds, it considers the instance to be down. When an instance is determined to be down, the DNS records are updated on the next refresh to point to the next service on the list. There will still be some downtime while all this happens, but it will most likely only be a couple of minutes.
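To make the failover behaviour concrete, here is a minimal Python sketch of what the Traffic Manager is effectively doing. The URLs are hypothetical, and the real service does this at the DNS level rather than per request:

```python
import urllib.request

# Hypothetical health endpoints, in failover order.
ENDPOINTS = [
    "http://myservice-eu.example.com/",
    "http://myservice-us.example.com/",
]


def is_up(url: str, timeout: float = 5.0) -> bool:
    """An instance counts as up only if it returns a 200 within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False


def pick_endpoint(endpoints, probe=is_up):
    """Return the first healthy endpoint, mimicking the DNS failover;
    None means every instance failed its probe."""
    for url in endpoints:
        if probe(url):
            return url
    return None
```

The `probe` parameter is injectable purely so the routine can be exercised without a network; in performance mode the real Traffic Manager also weighs in geographic proximity, which this sketch ignores.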

Tuesday, 23 April 2013

Breathing new life into old netbooks

You don't see many netbooks around these days, and for a very good reason. When they were new, their performance ranged from okay to rubbish, and they were no good for any real computing. Looking back, they were more like a first stab at the sweet spot between a smartphone and a full-on laptop/desktop, a gap which has been filled much more successfully by tablets in recent years.

I have 2 netbooks: one is a Samsung NC10 and the other a Dell Mini 9. Both of these have an Intel Atom N270 @ 1.60 GHz and 1 GB of RAM.
The Dell ran XP (badly), and the Samsung has run XP (badly), Windows 7 (really badly), and Ubuntu (just about acceptable).

When Windows 8 was released, one of the things that I noticed most was the marked improvement in general performance. Some of this was clearly down to the removal of superfluous effects, but that could not explain all of the improvement.

Just after Windows 8 came out, I installed it on the Dell. Performance was much better than I expected, better than XP. The Dell's odd screen size meant some hacking to get the screen into 1024x768, and even then there is a small amount of blur. It works okay for some basic note taking and browsing, but I wouldn't stress it with anything further.

Recently, the other half graduated to using an ASUS MemoPad for her needs, leaving the Samsung spare. I decided to install Windows 8 on this too and, surprisingly considering it has exactly the same processor and amount of RAM as the Dell, it performs even better. I haven't put Office on yet, so it could still go downhill, but so far I am very impressed.

So if you've got some old netbooks lying around that you had written off as being useless for anything, throw Windows 8 on them; you may be surprised at how nippy they are as a result.

Wednesday, 17 April 2013

Making old machines immortal(-ish) with P2V

The time comes to every PC when it's reached the end of its life and it's time to be turned off once and for all... except when that PC has old software on it with a non-transferable license. In an ideal world this wouldn't happen, but sometimes it just can't be helped: either the software is no longer available to buy and transferring to an equivalent would cost a bomb, or you'd have to buy a new license for the most recent version, which would also cost a bomb.

Fortunately, this is where virtualisation becomes really useful for a small business. While small businesses may not have workloads that warrant massive clusters of VMs spanning multiply redundant, fault-tolerant blade servers, they will almost certainly at some point experience the imminent failure of that machine: the one machine in the entire company that simply must remain. It cannot be upgraded and it cannot be replaced; it must exist forever.

In short, P2V (physical-to-virtual) allows you to take an existing OS install on physical hardware and convert it to run as a virtual machine on a virtual host. It's easy to do, so easy in fact that I'm not going to tell you how to do it. Some detailed instructions for performing a P2V conversion can be found here.

Some tips when you're doing this:

  1. Install the converter on the machine you want to convert - I've found the agent can sometimes be difficult to get working; if you want to get things done quickly, just install the converter on the machine itself and then uninstall it from the converted VM when you're done.
  2. Make sure you change the disks to Thin Provision - this will save you disk space on your virtual server. Space may not be an issue for companies with SANs, but if you're just using the drives in the server box, it doesn't hurt to save every gigabyte you can.
  3. Make sure you get the number of CPUs right - I didn't do this on the first XP machine I converted; the physical hardware had 1 core and the virtualised version 2. I ended up in a tricky situation where I had to reactivate XP but couldn't, because it couldn't connect to the activation service. Eventually, after numerous attempts, it connected and I could reactivate, but if I'd matched the core count I might not have had to.

Virtualisation for me takes between 2 and 3 hours, after which I have a virtualised copy of the hardware machine. As a matter of course I go through a few steps here.

  1. Create a snapshot before doing anything.
  2. Uninstall VMware Converter.
  3. A bit of general cleanup. Take out unnecessary items from the services and startup lists, and drop the visual effects (Classic mode for XP). This will make remoting a more pleasant experience, as there won't be so many differing colours to transfer over the wire and render on the client side.
  4. Almost forgot this one. On your physical PC, you probably have various software installed for the graphics card (ATI Catalyst Control Center, etc.), network adapters, and any number of esoteric peripherals. Most of these you will no longer need, so remove them and save your new virtual machine some work.

Take a snapshot after each of these steps, just in case you get anything wrong. The 10 seconds it takes to create a snapshot could save you 2-3 hours of starting all over again.