On leadership and the Ghost platform

I wanted to spend some time reflecting on my leadership behaviours and values.

I also wanted to try this new blogging platform called Ghost.

So I combined the two. Earlier today I spun up an AWS micro instance using the Ghost AMI, configured it (that required vi, which I can never remember how to use), created a subdomain and pointed it at the server using CloudFlare (which is instant), and then got into writing!

The pieces are self-reflective, to help clarify my thinking, but I tried to write them with an audience in mind, so hopefully you’ll find them thought-provoking too.

So far I have just a few short pieces.

I like Ghost. It focuses on writing and helps put you in a state of flow. I still have to install Disqus for comments, which requires editing files beyond what I want to do with vi, so I’ll need to install an SFTP server. I decided to do that later so I could stay in the writing zone.

Update: I tried updating the theme to include the Disqus comment system using this instruction. Then I made sure Ghost was running under Forever to stay up, and did a restart.


How cloud-oriented is the app?

RightScale lets you easily deploy an application to a cloud provider like AWS or Rackspace. We do this by abstracting your server infrastructure design to a layer above that cloud provider. Consequently, I often have conversations about whether an existing application is a suitable candidate for the cloud. The less common question, though a more significant one, is whether an application can really exploit the capabilities of the cloud.

I recently developed a little framework which helps explain some of the dimensions that affect how far an application can move beyond being forklifted towards being architected for the cloud.

What’s a cloud-oriented app?

It’s fairly well understood amongst web-based businesses that use the cloud that you have to design for failure. You have to assume any element of your infrastructure or application stack could fail. Perhaps Netflix made this most famous with their Chaos Monkey, which deliberately destroys service elements so that Netflix can test what happens and make sure the system is resilient. At re:Invent last year it was a theme in key presentations from AWS. This is no secret.

Applications designed with a service-oriented architecture should degrade gracefully if a service on which they depend is no longer available. A simple example: if an application presents a menu choice, and that menu is derived from a database, but the database is not there any longer (it has failed for whatever reason), the application needs to keep working anyhow.
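
In code, that graceful degradation can be as small as a fallback path. Here’s a minimal sketch in Python of the menu example (fetch_menu_from_db stands in for whatever data-access call the app actually makes):

```python
DEFAULT_MENU = ["Home", "Browse", "Help"]  # static fallback shipped with the app

def get_menu(fetch_menu_from_db):
    """Prefer the database-driven menu, but keep working if the DB has failed."""
    try:
        return fetch_menu_from_db()
    except Exception:  # connection refused, timeout, failover in progress...
        return DEFAULT_MENU

def broken_db():
    # Stand-in for a database that "is not there any longer".
    raise ConnectionError("database unavailable")

print(get_menu(broken_db))  # -> ['Home', 'Browse', 'Help'], the app keeps working
```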

In traditional enterprise data centres, the design assumption applications can make is that the hardware environment will be available 99.5% of the time or more. Core systems will expect 99.9%, and so on. The application at the top assumes all the layers underneath it are working, and generally does not fail gracefully.

Cloud applications dismiss that availability assumption, not because the cloud is necessarily less reliable, but because doing so makes for a more resilient and scalable application in any case, and because the cloud is a distributed system.

Cloud apps can scale horizontally (more servers) instead of vertically (a bigger server). Doing this requires that the constituent parts be able to scale that way.

What are some non-cloudy ways?

Some applications expect to find their data on a local file system, or store sessions in a low-throughput database. If the app is doing that, it’s not very cloud-oriented. That’s the “data location: actual server” column of the diagram below.

I itemised a few technical elements which can be done in a cloudy way, or not:

  • user data
  • session management
  • infrastructure configuration
  • application configuration

Some web-based applications like Drupal, phpBB and Concrete CMS store their application configuration and user data in ways that are not compatible with horizontal scalability. They need some adaptation to use a distributed file system such as GlusterFS, for example, and do not have public reference architectures for putting them into a horizontally-scaling environment.
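
Session management is usually the easiest of those elements to make cloudy: move sessions out of local files and into a shared store that every web server can reach. A minimal sketch using the redis-py client (the host name is illustrative):

```python
import json
import uuid

import redis

# Shared session store reachable from every web server in the pool;
# the host name here is illustrative.
store = redis.Redis(host="sessions.internal.example", port=6379)

SESSION_TTL = 3600  # expire idle sessions after an hour

def create_session(data):
    """Persist session state in the shared store, not on the local disk,
    so any server in the pool can handle the user's next request."""
    sid = uuid.uuid4().hex
    store.setex(sid, SESSION_TTL, json.dumps(data))
    return sid

def load_session(sid):
    raw = store.get(sid)
    return json.loads(raw) if raw else None
```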

I’ve mapped out those elements against a continuum from the least to the most cloud-oriented method (the solutions I’ve listed are not exhaustive).

(Diagram: how cloud-oriented is the app?)

For horizontal scaling, ideally the application needs to be installed unattended, in a scripted fashion. (So you can’t use phpBB’s next/next install wizard, for example.) This in turn means the configuration of the application needs to be defined and stored in a database or file accessible to the installer script, and then you have questions around version control and persistence of that configuration information.
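
A scripted install essentially answers the wizard’s questions from configuration the installer can read. A minimal sketch, assuming the settings arrive via environment variables; all paths, names and flags here are illustrative, not any real installer’s interface:

```python
import os
import subprocess

# Answers the wizard would normally ask for, populated at boot time
# by the provisioning system (names are illustrative).
config = {
    "db_host": os.environ["APP_DB_HOST"],
    "db_name": os.environ.get("APP_DB_NAME", "appdb"),
    "db_pass": os.environ["APP_DB_PASS"],
}

# Render the application's config file instead of clicking next/next.
with open("/etc/myapp/config.ini", "w") as f:
    f.write("[database]\n")
    for key, value in config.items():
        f.write(f"{key} = {value}\n")

# Run the installer non-interactively; the --unattended flag is hypothetical.
subprocess.run(["/opt/myapp/install.sh", "--unattended"], check=True)
```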

People and process change management also needs to be considered: if a user edits a configuration in a local file but doesn’t push that change back to SVN, it will not persist when the server is next rebuilt.

Some apps go easily to the cloud

Some applications will adapt well to a horizontally-scaling cloud environment, others will not.

Those which do not scale horizontally may still be suitable for deployment to cloud, but in a forklifted fashion. You’d retire the existing server and move the application to an equivalent cloud server, which is most likely cheaper. The application may not tolerate being turned off at night to save you money; if it can be, though, your financial savings increase markedly (a dev server run only 12 hours on weekdays consumes about 60 of the week’s 168 hours, roughly a third of an always-on server). Turning off servers is one key to unlocking cloud economics. Scaling horizontally is another.

In either case, it’s a lot more intellectually fulfilling and interesting than staying with the status quo.


Two worlds of cloud

Increasingly it’s clear to me that there are two worlds of cloud users, the populations of which often do not realise they are worlds apart. The first is inhabited by traditional enterprise IT users. Extending the analogy, it has more terra firma than clouds.

The second has a population of cloud-first companies who use cloud-era technologies. They largely do not use Windows, SQL Server, WebSphere or similar packaged software; instead their first choice is a SaaS or open-source equivalent like Debian, MariaDB, Couchbase, Django or a plethora of other technologies and languages.

The world of enterprise IT has a density of applications, systems, hardware and processes which creates a huge gravitational force towards the status quo. Change is costly and risky. Interestingly, the perspective of enterprise IT folk actually distorts how clearly they can see what cloud is and what it can offer. More on that later.

New cloud technologists are often fairly crap at risk management and change control until they’ve either hit some kind of scale (which brings the need for change control and rigour) or failed in some way. There’s often a religious rejection of anything that sounds like waterfall method (like planning), and a sprint to the next scrum instead.

I could have expressed this idea as a continuum, with enterprise IT represented by banking at the extreme of one end, and a seed-funded fledgling internet startup with a few clients at the other. One has great traceability and stability; the other has great spontaneity and agility.

James Staten from Forrester used a nice diagram which expressed how enterprise IT guys see cloud differently to cloud-era folk. The essential difference is that enterprise guys see cloud as the next logical evolution of virtualisation, whereas cloud developers see it as a programmable service. He comments on how this leads to private cloud implementations failing because the enterprise guys make it robust and highly change-controlled, whereas the developers just want an API that abstracts all those details away so they can launch a server in 5 minutes.

Google the paper “Rise of the new cloud admin” if you’re not a client, and read it. It’s absolutely spot-on awesome.

The result, in the real world, is that an enterprise decision-maker will look at his sunk investment in a Vblock environment and ask me if we can make it into a cloud. Sure. But that question only gets you to first base.

You can run a private cloud on enterprise-grade hardware, but of course you don’t have to. Let’s assume you do, though, because you want more HA on the private cloud; that’s the norm. Then the usual first step is to make life easier for test and dev workloads, so your application teams can launch a base server really easily to test a new version of an application. Making that happen with a self-service portal isn’t hard either.

Here’s the catch though. Enterprise users are then putting traditional packaged applications into a private cloud, managed by ITIL, subject to change control and normal InfoSec policies. So you get a non-agile result in some dimensions. The dimension you have made cloudy is provisioning the hardware and operating system layers of the virtual machine, using a self-service portal.

Often, the installation of that enterprise application is not automated. If it is, it’s usually neither multi-cloud portable nor version-controlled so that it can be moved elegantly through a lifecycle of test, staging and production.

The difference in the cloud world is that the application would have been defined and installed using configuration scripting. The application gets considered first, the hardware and operating system are almost assumed.

The interesting challenge for enterprises is to simultaneously:

  • look for enterprise applications they can move to the cloud,
  • identify applications they can migrate to cloud-era technologies, particularly considering NoSQL or graph databases, or cloud-friendly horizontally-scaling frameworks like Django, and
  • identify applications or systems where configuration-based infrastructure definition is possible (RightScale uses Puppet or Chef), instead of virtual machine images managed under change control; a toy sketch of the idea follows.
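
The core idea behind Puppet and Chef is declaring desired state and converging to it idempotently, rather than hand-managing machine images. A toy sketch of that idea in Python (not Puppet’s or Chef’s actual DSL, just the principle):

```python
import subprocess

def ensure_file(path, content):
    """Converge a file to the declared content; a no-op if already correct."""
    try:
        with open(path) as f:
            if f.read() == content:
                return  # already in the desired state
    except FileNotFoundError:
        pass
    with open(path, "w") as f:
        f.write(content)

def ensure_package(name):
    """Install a package only if it's missing (Debian-flavoured example)."""
    if subprocess.run(["dpkg", "-s", name], capture_output=True).returncode != 0:
        subprocess.run(["apt-get", "install", "-y", name], check=True)

# Declare the desired state; running this twice changes nothing the second time.
ensure_package("nginx")
ensure_file("/etc/motd", "Managed by configuration, not by hand.\n")
```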

Notes from keynote at Cloud Inspire, Seoul

I’m delivering the keynote talk at SK Telecom’s Cloud Inspire event in Seoul, and in preparing my talk about hybrid clouds I reviewed many sources. For those wanting more detail, I have put together some notes below.

Research on cloud adoption, applications and cloud developers:

  • RightScale Cloud Survey also as of Jun-2012
  • Everest, “Enterprise Cloud Adoption Survey 2013” cited in Gigaom article
  • Forrsights Developer Survey, Q1 2013
  • Forrester Global Cloud Developer Online Survey, Q3 2012
  • Forrester, November 2012 “Don’t Move Your Apps To The Cloud”

Security and risk:

User stories:

Hybrid cloud architecture:

Agility:

Reasons for hybrid:


Roll your own “enterprise” hardware

Not long ago, I found an article about how Google is Intel’s fifth largest customer for server chips. I thought it was a brilliant barometer of the disruption that cloud computing is causing for traditional enterprise hardware providers like Dell, IBM and HP.

RedMonk recently wrote an article surveying the various data points which go to this issue, including a few I’ll call out:

  • Rackspace are joining Open Compute, the new standard for servers needing cloud-scale economics
  • Quanta will one day sell directly to enterprises, say for private cloud

The Open Compute standard drives down manufacturing costs by removing vanity and wasteful items like logos, flashy lights, DVD drives and serial ports. Manufacturers like Quanta (based in Taiwan) then build to your specification.

The standard also considers operational effectiveness, such as cooling and space saving. For example, the Open Rack width lets you fit more 3.5″ HDDs side by side than a traditional rack does.

It’s clever because it’s so obvious, and it’s so disruptive.

So how about the software layers?

(Diagram: road to cloud)

At one layer of the software stack, we have the open-source hypervisors KVM and Xen, which compete with VMware’s ESX.

Technically I could say there’s competition at the cloud orchestration layer with OpenStack and CloudStack, but in the context of disrupting existing markets, I think it’s more useful to consider them as platforms which allow the creation of alternative ecosystems to that of AWS. Fascinating, really.

IBM recently, and VMware before them, throwing their weight into the ring with OpenStack will help bolster its engineering depth, alongside the hundreds of existing contributors. RightScale is also a corporate sponsor.

I don’t believe any single cloud infrastructure provider will monopolise, but that we will see a future of multiple clouds interoperating. Since there is no singular interchange standard, this is going to be an interesting process.

Thanks to RedMonk; here’s the article.


Noisy neighbours in cloud computing

(Image: congestion. Traffic shaping is one solution for a congested network.)

“Noisy neighbours” is the current term for an age-old phenomenon that arises whenever many users share a medium; in cloud computing the shared medium is CPU, storage and networking. In IP networking, one solution is Quality of Service (QoS), which prioritises network traffic so the most important packets get through whilst others get shaped. From the perspective of the shaped packets, the bossy prioritised packets are the noisy neighbours.

Other methods to solve this contest for shared network resources include Weighted Random Early Detection and Weighted Fair Queuing.
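
To make the queuing idea concrete, here is a toy Python sketch of deficit round robin, a practical cousin of Weighted Fair Queuing: each queue earns transmission credit in proportion to its weight. It’s a simplified illustration, not a production scheduler:

```python
from collections import deque

class DeficitRoundRobin:
    """Toy packet scheduler: each queue gets bandwidth roughly in
    proportion to its weight, approximating weighted fair queuing."""

    def __init__(self, weights, quantum=1500):
        self.queues = [deque() for _ in weights]      # one FIFO per class
        self.quanta = [quantum * w for w in weights]  # credit earned per round
        self.deficits = [0] * len(weights)

    def enqueue(self, queue_id, packet_size):
        self.queues[queue_id].append(packet_size)

    def transmit(self):
        """Yield (queue_id, packet_size) in transmission order."""
        while any(self.queues):
            for i, q in enumerate(self.queues):
                if not q:
                    self.deficits[i] = 0  # idle queues don't hoard credit
                    continue
                self.deficits[i] += self.quanta[i]
                while q and q[0] <= self.deficits[i]:
                    self.deficits[i] -= q[0]
                    yield i, q.popleft()

drr = DeficitRoundRobin(weights=[3, 1])
for _ in range(6):
    drr.enqueue(0, 1500)  # the "bossy" high-priority class
    drr.enqueue(1, 1500)  # the neighbour being shaped
print([qid for qid, _ in drr.transmit()])  # queue 0 sends ~3x as much per round
```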

Operating systems for mainframes had to solve it too, with various methods to divide the monolith among its hosted applications.

In cloud computing, and in virtualisation within traditional data centres too, noisy neighbours are a problem that affects the consistency of disk throughput, network throughput and CPU performance that an application will see. Applications will still work, but if they need to finish a batch job within a certain time window, for example, it may be hard to predict how many hours it will take; users of a web-based application may likewise observe varied response times.

In cloud computing, the noisy neighbour issues get addressed by the provider at various technical layers (the hypervisor, for example), but also at the commercial layer: some cloud providers offer opt-in paid services for users who require higher throughput or more consistent disk performance, such as AWS Provisioned IOPS.

It also gets addressed by the user, who can turn the granular nature of cloud computing into a workaround in itself. For batch workloads with dozens of servers, some users buy more servers than needed, run some speed tests, and discard the servers that are under-performing. RightScale clients sometimes stripe their storage across multiple block storage volumes to normalise performance. Many RightScale clients choose whichever cloud best suits their application workload, and could try Rackspace, AWS, Google Compute Engine, Azure or private clouds using OpenStack or CloudStack.
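
The launch-test-discard trick is easy to script. A minimal sketch, where launch_server, measure_throughput and terminate are hypothetical stand-ins for your provider’s API calls:

```python
import random

def pick_fast_servers(launch_server, measure_throughput, terminate,
                      needed=8, extra=4):
    """Launch more servers than needed, benchmark each, keep the fastest,
    and hand the under-performers back (the noisy-neighbour workaround)."""
    servers = [launch_server() for _ in range(needed + extra)]
    ranked = sorted(servers, key=measure_throughput, reverse=True)
    for server in ranked[needed:]:
        terminate(server)  # discard the slow ones
    return ranked[:needed]

# Demo with fake helpers; in practice these wrap your cloud provider's SDK.
ids = iter(range(100))
kept = pick_fast_servers(
    launch_server=lambda: next(ids),
    measure_throughput=lambda s: random.uniform(50, 200),  # pretend MB/s
    terminate=lambda s: None,
)
print(kept)
```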


Intel provides view into cloud shift

Yesterday’s fascinating article in Wired says that Google is now Intel’s fifth largest client for server chips. In 2008, this Intel division made 75% of its sales to IBM, Dell and HP. Five years later, the same 75% is spread across eight buyers, the fifth being Google.

This provides a view of the shift away from owner-operator enterprise IT and towards buying compute as a service through Amazon or other IaaS providers.

It’s fair to assume that Amazon and Facebook are in that top eight, either directly or by proxy through their manufacturer (Quanta is one such). The article also mentions Facebook’s Open Compute project, a strategy to reduce the acquisition and ownership costs of the hardware; the servers cool more effectively, too.

IDC and fellow analysts don’t have solid data on how much of the total addressable server market is taken by this new breed of buyer. Wired cleverly called it the “server world’s Bermuda triangle” because of analysts’ poor visibility of spending in that zone.

The economic factors which favour IaaS providers like AWS and Google Compute Engine:

  • They have equal or better buying power to IBM, Dell and HP
  • They spend less on energy. AWS and Google run in data centres with energy efficiency (“PUE”) of about 1.2, roughly 50% lower than most enterprise data centres, which are around 2 to 2.4 (at PUE 1.2, a 1MW IT load draws 1.2MW in total; at PUE 2.4 it draws 2.4MW). Facebook’s Oregon DC runs at 1.11. In Australia, the new NSW Whole of Government data centre is likely to have a PUE of 1.29.
  • Their scale plus automation systems drive down operational costs to a greater degree than an enterprise IT buyer can easily achieve.

At this scale, with the prior stalwarts of server sales losing ground to providers who don’t resell the chips but instead sell a service, the inexorable domination of cloud computing is obvious.

Facebook’s Oregon datacentre, a server aisle.

In 2008 Nicholas Carr gave his now-famous analogy in The Big Switch: that IT is leaving company data centres and shifting to cloud computing, mirroring the shift electricity generation made from local steam generation to the newly invented power grid.

Part of the reason enterprises are moving slowly to cloud is their (necessary) dependence on existing stable applications. That’s fair enough. All of my clients have systems which have been designed within the concept of traditional enterprise IT.

The next advance toward cloud comes from changes to the buying and architectural decisions of the IT organisation. It is to think of compute as a service, and to move to a service-oriented architecture: for example, to design apps with the assumption that the hardware underneath will fail, as opposed to the current state where apps can trust the hardware to be available >99.9x% of the time.
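
In practice, “assume the hardware will fail” starts with never trusting a single call to a dependency. A minimal sketch of retry with exponential backoff (the flaky_service function is illustrative):

```python
import random
import time

def call_with_retries(fn, attempts=5, base_delay=0.1):
    """Retry a flaky dependency with exponential backoff and jitter,
    instead of assuming the layer underneath is always available."""
    for attempt in range(attempts):
        try:
            return fn()
        except (ConnectionError, TimeoutError):
            if attempt == attempts - 1:
                raise  # out of retries; let the caller degrade gracefully
            # Back off exponentially, with jitter to avoid thundering herds.
            time.sleep(base_delay * (2 ** attempt) * random.uniform(0.5, 1.5))

def flaky_service():
    # Stand-in for a network call that fails about half the time.
    if random.random() < 0.5:
        raise ConnectionError("dependency unavailable")
    return "ok"

print(call_with_retries(flaky_service))
```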

Netflix put this eloquently in a zdnet article. In explaining the philosophical design shift, their cloud architect said:

The typical environment you have for developers is this image that they can write code that works on a perfect machine that will always work, and operations will figure out how to create this perfect machine for them. That’s the traditional dev-ops, developer versus operations contract.

Instead, he says, the way Netflix now does it is different. This is the point I’m making. Netflix:

We don’t keep track of dependencies. We let every individual developer keep track of what they have to do. It’s your own responsibility to understand what the dependencies are in terms of consuming and providing [services].

We’ve built a decoupled system where every service is capable of withstanding the failure of every service it depends on.

Everyone is sitting in the middle of a bunch of supplier and consumer relationships and every team is responsible for knowing what those relationships are and managing them. It’s completely devolved — we don’t have any centralised control. We can’t provide an architecture diagram, it has too many boxes and arrows. There are literally hundreds of services running.

A longer treatise on service-oriented design is in Steve Yegge’s accidentally published rant about how well Amazon gets it (and how Google doesn’t, but that’s now being tested with Google Compute Engine). Amazon started the transformation to a service-oriented architecture in about 2002.

It was about 25 years ago that I started programming, and about 15 years ago that I stopped. I never had to develop in this paradigm (though I keep wanting to try), and I haven’t led an IT organisation through this scale of change, so I’m not naively saying this change is easy. Steve’s post gives some sense of the huge difficulty of it.

I don’t think it’d be easy to change a large organisation’s IT from a traditional mindset to one that fully exploits cloud computing, but damn the rewards and journey would be awesome.


Setting up a cloud server using RightScale

RightScale is a cloud computing management platform which can handle AWS, Rackspace, SoftLayer and other IaaS providers, plus private clouds from Eucalyptus, OpenStack and CloudStack. I know it’s awesome, because of its abstraction and capability, but I hadn’t used it personally. So, to reconcile this dire gap in my life experience, I decided to create and control a cloud server with RightScale. I wanted to see how easy, or otherwise, it was to do a basic task with such a highly sophisticated management tool.

Some background: I’m not an engineer, I’m a solution sales guy with a geek orientation. I have mostly webmaster technical skills. To be clear: I don’t know what /etc means in Linux, and when I tried learning Ruby recently I realised that having not coded for over 15 years definitely made me a novice again. But earlier this year I migrated my cloud server with multiple cPanel accounts to a shared host and used CloudFlare to minimise downtime. My day job is in sales: major account management with Cisco Services.

However, by the end of this little project, I realised that someone much less technical than me could have done this. At the same time, someone more technical than me would really maximise the potential and be able to use RightScale for its true purpose. I barely scratched its surface.

With a nod to Rackspace’s announcement of their Sydney data centre opening later this year, I chose to create an account with them. I had previously used a cloud server with Liquidweb.

Creating a RightScale account is much like you’d expect. No credit card required. I’m obviously going to use a free account for this test. The process steered me quite easily to the quick start guide.

As an aside, in Rackspace I was annoyed whilst setting up the secret Q&A needed to verify myself: I repeatedly got an error about an invalid password. On the third attempt it was accepted; one of the problems was that apostrophes weren’t allowed. I was also reminded of a pet hate, which is security question choices that include “your favourite” item x or y. My favourite things change over time, if I have any, so I’ve always thought these are the stupidest options to suggest, because challenge questions should have an unequivocal answer. (I later worked out that Rackspace was in fact asking me to reset my secret Q&A each time. Some UI problem.)

Rackspace gave me a call to verify my account, and with that it was confirmed and activated.

In RightScale, then, the first step once you have an account with a cloud provider is to add it so RightScale can act on your behalf. I took the API key for my Rackspace account and stuck it into the RightScale dashboard, but I kept getting a “Too many requests…” error. After a few more attempts the error disappeared without me being able to isolate the root cause. Pretty odd.

So at this point, I have a RightScale account and a Rackspace account. No servers created on Rackspace, but I’m ready to do so. Next up, creating servers.

If you’ve ever bought a shared host, VPS or linode server, you’d know how easy it is to buy a webserver with (or without) cPanel.

The difference when using RightScale is that instead of directly creating a server within the IaaS provider’s control panel, I do it through RightScale – like a remote management interface – using one of the ServerTemplates provided by RightScale (or one of my own devising, if I had the skills).

This is a powerful abstraction. The server deployment workflow looks like this (diagram):

The RightScale marketplace of ServerTemplates is comprehensive too, including one for a high-availability MySQL 5.5 master/slave server, another provided by IBM for DB2 Express (their free edition), and a few memcache templates. Because the build process is scripted, you can create one that’d function identically on a Rackspace or an AWS server, or on others if the template supports it. This in turn means you can more easily deploy new capacity across multiple cloud providers, and that lets you design around zone outages.

Anyhow, RightScale thoughtfully provide a basic template for a LAMP server with WordPress, and the quick start guide covers that too. So I go and add it.

I choose that ServerTemplate, and a few steps later the server is ready within my RightScale account.

Adding LAMP with WordPress within RightScale

But it’s not live on the Rackspace servers yet. I need to choose to do that, when I’m ready, with Launch.

After this, I saw a scary-looking page with dozens of pre-completed fields. I checked the quick start guide at this point, wondering if I had to customise anything. This stage is where you could make the server unique and provide any overrides on settings like the password or database prefix. My template is defined to inherit defaults, and scripted such that I don’t have to change things; this template is designed for instructional purposes, not production.

Then RightScale starts the creation process on Rackspace on my behalf. I get some status and progress information in RightScale, with the ‘events’ sidebar being a bit hard to decipher due to my lack of familiarity.

Server monitoring during provisioning

Within Rackspace, I can see it’s been created.

Server has been created within Rackspace, as shown via control panel

Then, a few minutes later, I’m given a public IPv4 address, click it, and I can see the WordPress registration screen. I could SSH into the server too. My server is running.

If I’d had the need, I could have set up a separate MySQL server, a memcache server and WordPress with W3 Total Cache installed, or something like that. That’s really what RightScale is designed for – managing multiple IaaS servers, including financial reporting on their usage.

Creating those servers through RightScale, using templates, would give me the capability to migrate the lot to other cloud providers, or increase the availability by serving from more than one IaaS provider.

This really is such a neat system.


Considering the Agile philosophy within deal pursuits

I have been reading about the Agile software development philosophy for years, and it seems that the core ideas are worth exploring in relation to major deal pursuits. I think I first came across Agile thinking in about 1999, in the form of extreme programming. At the time, though, I only looked at it through the lens of selling software development services, not major deals.

What is the Agile Philosophy?

As described in the Agile Manifesto, the Agile Philosophy is characterised by the following values:

  • Individuals and interactions over processes and tools
  • Working software over comprehensive documentation
  • Customer collaboration over contract negotiation
  • Responding to change over following a plan.

I thought it’d be interesting to consider how deal pursuits could benefit from Agile.

Which ideas from Agile could work in Major Deal Pursuits?

Evidently, amongst these core principles there are ideas that have a degree of synergy with Major Deal Pursuits.

Within the remit of Major Deal Pursuits there is an element of business creation and business model generation. It makes sense to apply Agile Philosophy and ideas from Lean Startup, since a more tailored solution is likely to emerge when there is less waste and more scope for feedback from the client. In turn, this is more likely to result in a deal being sealed and is especially the case for solutions that include large-scale software development, design and construction of datacentres or outsourcing entire network operations.

Integrating Agile as a client feedback loop would obviously lead to overall improvements compared to traditional processes such as Waterfall. Certainly Agile would be a useful way to fix any issues prior to implementation, in a similar way that RUP advocates testing after small increments of development. And if a solution doesn’t work in terms of cost, risk or compliance, it should be changed rapidly. Agile seems to delight in scrapping a solution in order to respond to change, and this is an attitude that bid teams could consider.

Which Agile ideas could prove problematic?

There are of course elements that do not fit so well with Major Deal Pursuits.

Agile does not fit well with the fact that Pursuit Leaders work to strict deadlines. Pursuit Leaders like to gauge interest quickly and so there is the potential for customer interaction to adversely affect the familiar and expected deal-making process. Whereas the Agile Philosophy states a preference for the shorter timescale, for the Pursuit Leader, it is a necessity.

Similarly, during the negotiation phase you need to demonstrate that your solution is worthy of, say, a $100M+ investment. Over-reliance on feedback could undermine the perceived robustness of a solution, and your firm’s expert authority could be lessened. Indeed, for a Pursuit Leader the focus is on agreeing a large contract, and this contradicts one of the principal values of the Agile Philosophy: customer collaboration over contract negotiation.

Thull’s diagnostic selling method, outlined in “Mastering the Complex Sale”, could however benefit from taking an Agile view, as it does not presume a time-bound pursuit window.

In practical terms, there are also tight restrictions on how much information can be shared about the solution design and it could be harmful for the Pursuit Leader to be seen as reluctant to share information with individuals or unable to explain why certain ideas cannot be implemented. It may be worth looking at ideas around “minimum viable audience” and creating a scalable proposition that could be presented to the client. Interesting, but would only work if you were in a position of control during the negotiation, rather than being in a position of competition with others.

How would a Major Deal Pursuit Leader go about implementing Agile?

Let us turn our attention to the possibility of practical implementation of Agile in Major Deal Pursuits.

The Pursuit Leader would have to consider timing as a primary concern: namely, at what point during the process would you look to implement Agile? Do you look to instil Agile as an overriding culture for the pursuit, or should there be specific checkpoints for Agile? Beyond that, how exactly do you build agility into the process? The issue of personnel is another important consideration here. Is Agile the responsibility of the whole team, or does it make more sense to assign Agile relationship-building to a particular team member? If so, do you appoint the role based on character and soft skills, or on status and position within the bid team?

What do we need to consider most closely when thinking about Agile Philosophy within the context of Major Deal Pursuits?

The main factors to consider when potentially assimilating Agile into Major Deal Pursuits can be distilled into these five criteria:

  • Impact on Negotiation
  • Benefits for Solution Design
  • Adverse Effects on Deadlines
  • How to Harness the Potential of Collaboration
  • Implementation and Personnel

As we have seen, some elements of Agile are more applicable than others. The important thing to take away is that Agile gives the scope to find a good fit for the parts of the philosophy that will work and offers some intriguing possibilities for consideration.
