It’s time to Scale - A look back - Part 1 - Read time - 4:30

My career in technology has a number of recurring themes that seem to follow me from company to company and role to role: scale, openness, taking calculated risks, and most importantly working with a team of people who share those views!  The specific topic of scale has a deep-seated history in IT and continues to drive many of our technical discussions and challenges - however, today I am facing scale in a more personal sense.  It is time for me to "scale out"...but more on that later! Before I tell you all about my new adventure with Arctiq, I'd like to share with you where I have been and some valuable experiences that got me here.

In the early days it was easy to scale: just buy a bigger server!  As a sysadmin wearing many hats and keeping the lights on, I had plenty of opportunity to "scale up" whenever required or desired.  Mammoth 8U servers with only 4GB of RAM...but no one complained, as virtualization didn't really exist yet!

When Linux waddled slowly onto the scene, I grabbed hold of it...now things were getting exciting!  But not everyone was a believer; then, as now, many people feared change.  Back in the early 2000s, I had just finished porting a couple of application workloads over to Linux when my manager at the time saw the console on a server and gave me an opinionated earful about Linux: "How can a business ever trust a bunch of people writing code in their basements?"  Thankfully, the last 15 years have turned those passionate basement-dwelling coders into a worldwide open source community. Nobody questions Linux any longer.

My path to virtualization differed from most. Solaris X86 and zones paved my way from Xen R&D projects to real enterprise virtualization. Cheap servers with lots of RAM, open source software, and a willingness to experiment.

This is where SCALE really started to accelerate.  We were doing automation and DevOps back then (we just called it configuration management at the time!), and everyone was willing to try something new and risk failing fast.  We tried to solve all our real infrastructure and application problems with open source tools and a lot of hard work.  Supportive company leadership left us alone to try and make it a success...as long as we hit our SLAs (and we always did!).

Then Larry bought Sun.  Things changed.  Fast. The threat of Sun removing support for Solaris 10 x86 was real, especially on inexpensive non-Sun hardware.  I still remember that conference call: our CTO listened, then calmly said, "OK, go find another option.  We can't continue down this road."  His leadership and support helped me through one of my first major career "pivots".  R.I.P. Sun Microsystems (you were way ahead of your time).

VMware would have been the easy way out - expensive, but easy!  As usual, we opted for open source and took the path less traveled...Red Hat.  We were not newcomers to RHEL, but it was the newly acquired RHEV (from Qumranet) that caught our attention.  We started with a POC and quickly became part of their beta program.  Again, through experimentation and hard work, we broke that code over and over again...with full Red Hat support!  Many of the features you see in today's RHEV come from our testing and requests back in the day.

We successfully ported thousands of Solaris Zones to RHEV and deployed Hyper-V for our many Windows workloads.  The tools helped us pave a path to "showback", as we were already consuming based on "cost per VM" and "cost per GB".  And with all the current press around the Microsoft and Red Hat partnership, you'd think we had a hand in that too (we didn't...directly!).

But we still lived in the world of physical infrastructure.  We took it upon ourselves to remove the hardware dependency, enable leading technologies, and simplify the deployment.  All in the name of scale!  With Cisco's support, we POC'd their new UCS platform to great success.  It achieved our goals of standardized, stateless, automated configurations, while their Service Profiles solved our capacity utilization challenge, allowing the same UCS cluster to run a VDI stack during the day and GRID computing after hours. Somehow we always found a way to buck the trend: we were running RHEV and Hyper-V on UCS when everyone else was running VMware!

In the end, we were able to replatform two data centres and, most importantly, deliver on the aggressive capital savings goals the project set out to achieve.  We took a standardized approach to everything.  Everything could "scale out".  We had a great handle on our cost model and treated our infrastructure like a commodity...another foresight?

I learned early on that the best way to learn is to jump in with both feet.  Writing this blog forced me to think about how I got where I am today.  If I could offer a young IT professional some advice, the most important aspects of my career growth fall into a few categories:

  • Find a place to work where you will be challenged every day.  Have a willingness to experiment (and potentially fail!) outside your comfort zone.

  • Build strong partnerships in life and at work - people who challenge you, support you, and let you learn.  This extends to suppliers and vendors - and never burn these bridges; your peer today could be your boss (or customer) tomorrow!

  • Attitude and aptitude trump experience and pedigree.  Trust is a key component.

  • Accept the challenge of extra work.  Most people don’t get the chance to prove their true capabilities.  You will get paid back for it later in life, so embrace the experience.

  • Finally, when you are no longer challenged, move on.  I don’t mean quit; make a fair, smooth transition plan and people will be happy for you and help you succeed.

Part 2:  The path forward… A leap of faith