Tech Player – Will Ballard, CTO Gerson Lehrman Group Part 1

Recently we had the chance to have an extended and at times surprising conversation with Will Ballard, CTO of Gerson Lehrman Group, the world’s premier expert network. Our discussion covered a range of topics including the perspective of a technical leader in a large organization, cloud’s benefits and drawbacks, and the role of IT in a forward-thinking organization.

This is the first of two parts. Check out Part 2 here. Let us know your thoughts on Twitter @CloudGathering.

Gathering Clouds: As a CTO who is leading the IT team for a major company, what is your perspective on how IT is meant to help an organization progress its mission?

Will Ballard: I’d describe my job in two parts: The first is making the entrepreneurs’ dream go, while the second is to make money for the investors. In terms of my perspective on technology or IT, we talk about projects according to my simple prioritization algorithm. It goes like this:

1) Is the project going to make money? If yes, break down the dollar figure, and then do it.

2) Is the project going to save money? That’s a different kind of profit, and if you can do it, do it. If a project does neither, or if you have absolutely no idea, we prioritize it in an easy-to-do order so we can deal with the speculative end while focusing on revenue and savings.

It’s the job of the tech group to be problem solvers. We help people grow their business whether they have a new product to offer or want to enter a new market or control their costs using automation in lieu of additional staffing.

GC: So you are not necessarily tech focused, but rather business focused overall from the perspective of what the tech can bring to the business. What is your perspective on cloud for the purposes of a business like Gerson Lehrman Group (GLG)?

WB: Specifically with cloud, to me it is just the deployment vehicle of choice at the moment. There are nice things about it certainly, and we’re running a couple of different versions at the moment. We have external clouds, specifically some amount of Heroku we are using, and also an internal cloud with VMware and Cisco. The main thing our clouds give us as an organization is programmable infrastructure. So the fact that it is elsewhere doesn’t matter to me; the fact that I avoid buying equipment doesn’t matter; but if I can make managing the provisioning and allocation of IT infrastructure a different kind of programming task, that really lets me rework what IT is and helps me reorient the classic operations IT function. Then it becomes a programming job rather than a wrench-turning job. That substantially changes the amount of money we spend on IT operations and staffing, because we are basically replacing that work with software.
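The "programmable infrastructure" idea can be sketched in a few lines of code. The `CloudAPI` class below is a hypothetical stand-in for whatever provisioning API an internal or external cloud exposes (EC2, vSphere, Heroku, and so on); the point is that capacity becomes something you declare and converge on in code rather than rack by hand:

```python
# Minimal sketch of programmable infrastructure: provisioning becomes a
# programming task instead of a wrench-turning one. CloudAPI is a toy,
# hypothetical stand-in for a real provider SDK.

class CloudAPI:
    """Toy in-memory cloud: tracks which instances exist."""
    def __init__(self):
        self.instances = {}

    def provision(self, name, cpus, ram_gb):
        # A real provider would make an API call here, not a dict insert.
        self.instances[name] = {"cpus": cpus, "ram_gb": ram_gb, "state": "running"}
        return self.instances[name]

    def deprovision(self, name):
        self.instances.pop(name, None)

def scale_web_tier(cloud, desired_count):
    """Declare the desired state; the code converges the infrastructure to it."""
    current = [n for n in cloud.instances if n.startswith("web-")]
    for i in range(len(current), desired_count):
        cloud.provision(f"web-{i}", cpus=2, ram_gb=4)
    for name in current[desired_count:]:
        cloud.deprovision(name)

cloud = CloudAPI()
scale_web_tier(cloud, 3)        # burst up for the working day
print(sorted(cloud.instances))  # ['web-0', 'web-1', 'web-2']
scale_web_tier(cloud, 1)        # scale back down at night
print(sorted(cloud.instances))  # ['web-0']
```

Because the scaling logic is just code, it can be version-controlled, tested, and reused, which is exactly what makes it a programming job rather than an operations ritual.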

GC: Would you say you are an advocate for keeping it in house?

WB: Not at all! Sorry if I said anything that even vaguely sounded like that. Our default setting for any new project is to find a SaaS [software-as-a-service] provider for it, and I have no conviction that I have to have it in house to control it, or that if it were in house I could do it better. I don’t think any of those kinds of things.

GC: When you were with Demand Media, Rackspace was the service provider of choice. Can you shed some light on how you view the role of an outsourced cloud provider, how you think about them, their strengths, drawbacks and what lessons from that period of your career you have carried through to today?

WB: In terms of Rackspace, the only utilization I was aware of was for our Pluck product, and it was only the mechanism by which we got overseas. So the decision was made as a trade-off against outfitting and running a UK data center. This would have been in 2007. Rackspace was fine as an MSP [Managed Service Provider]. I felt their customer service declined over time, but they were a service IT organization and they really didn’t have high-level automation for their managed hosting products. They ended up in a scenario where they would have to average their staff IQ points over customers because the automation wasn’t there. For our inside IT at Demand Media, we acquired a ton of companies and it was eclectic. We virtualized heavily in order to contain it. The IT was broadly done in house and it was executed in what I call the classic IT style, in that the IT function wasn’t programming oriented: technician, not engineer, really blue-collar in the sense that it was a job and it was professional and you worked on it. But it wasn’t optimizing, it wasn’t programming. That was a real thing I learned: I didn’t want to be in the business of having my own data center for new activities, because I didn’t see how you could be better enough at it to add to your business at all. There are exceptions at the ultra-high end, the Yahoos, Googles, and Facebooks of the world. But practically any company short of that scale of computing, which is essentially everybody, is better off using virtualized cloud and provided assets, and ideally obtaining those services externally. I can imagine scenarios where you get penned in: you have legacy databases and there are issues of latency that limit your ability to use a remote cloud effectively. Having your own in-house IT and services is about as awesome as having your own plumber. Unless you are in the plumbing business, you aren’t going to grow your business that way.

GC: So how can IT avoid simply being plumbers? How do they become drivers of real innovation in an organization?

WB: The most important way is to be the problem solvers for the business units. It ceases to be about technology or the pursuit of technology for its own sake. It’s about having a set of tools you know how to use to solve peoples’ business process problems and automate them. That’s where I think the real value is, and it’s not in making the computers or really understanding computers and technology. It’s about understanding people’s business problems that they are trying to solve and then making the computers make part of that cost or hassle go away. Or, if people have a revenue opportunity they think they can get by building a data service or product, building that for them. You stay relevant and useful in IT by being a business person, not by being a technologist. If I was at a software development company, I could be relevant by being a technologist, but not on the IT side.

GC: GLG’s model is more closely aligned with small or medium enterprise scale. Is the cloud possible for an enterprise at a similar or larger scale? At what point does it make sense for a company to own rather than outsource or rent what it needs?

WB: If you add it all up with staff, when I sit down and do the math and make it a financial decision, the way it works out is this: if you have a large number of systems that run 24/7 with no burst capability, so you can’t scale them up or down on demand, there is no financial justification for going to a cloud or an outsourcing provider. People get left in a scenario where they make that decision based on their ability to acquire talent to run their infrastructure and get their projects done. But I don’t think the size of a company has anything to do with it. It’s about the nature of the systems. For example, say you have a monitoring system with huge variation, which you run 8-12 hours a day and can practically turn off at night. That system is definitely a candidate for the bursting benefits of cloud. An application like an Exchange server, because it runs all of the time whether external or internal, doesn’t realize the same cost break moving to the cloud, because you are essentially moving the equipment around and paying for someone to make a profit. That is the hard part: are you willing to pay for someone else to have a financial profit vs. running it internally? If you’re an inefficient operation, and you have 1,000 people on one Exchange administrator, you probably aren’t going to be able to get that money back.
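The break-even reasoning above can be made concrete with toy numbers. Every figure below is hypothetical, chosen only to show the shape of the calculation: cloud pricing carries the provider's margin, so it only wins when you can actually turn capacity off.

```python
# Toy break-even comparison (all prices hypothetical): a steady 24/7 workload
# vs. a bursty one that runs ~10 hours a day.

CLOUD_RATE = 0.50    # $ per server-hour, includes the provider's profit margin
OWNED_COST = 250.0   # $ per server-month: amortized hardware + power + staff

def monthly_cloud_cost(servers, hours_per_day):
    # You pay only for the hours the servers actually run.
    return servers * hours_per_day * 30 * CLOUD_RATE

def monthly_owned_cost(servers):
    # Owned gear costs the same whether it is busy or idle.
    return servers * OWNED_COST

# Steady 24/7 system (e.g. a mail server): cloud just adds someone's profit.
print(monthly_cloud_cost(10, 24))   # 3600.0
print(monthly_owned_cost(10))       # 2500.0

# Bursty system you can switch off at night: cloud comes out ahead.
print(monthly_cloud_cost(10, 10))   # 1500.0
```

With these stand-in rates, the always-on system is cheaper owned, while the 10-hour-a-day system is cheaper in the cloud, which is exactly the "nature of the systems, not size of the company" distinction Ballard draws.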

GC: What would it take for a company like GLG to migrate to the cloud? Is that an unattainable prospect or should it be done?

WB: If and when anybody ever puts together a cost model that shows it saves more than the switching costs in 6 months. It’s purely a financial decision to me; there are no shortcomings in the technology or security issues. It’s completely a dollars-and-cents decision.

GC: So your perspective is that it’s not really about the technology, it’s about your process at that point?

WB: Processes, and the reality that professional vendors will be better at security and data integrity than most IT departments. When I hear IT people talking about being simply anti-cloud, it sounds like job protectionism, being reactionary.

GC: GLG has a huge and geographically dispersed workforce and Council Member population. What is your view on how mobile is going to impact your business, and from an IT perspective, how are you framing mobile in the context of your broader infrastructure?

WB: It’s more on the application development side. Broadly, I am really only interested in approving new projects if they are mobile first. Non-mobile stuff like classic web applications is really only maintenance-focused, and we don’t really do web applications. Because my assumption is mobile is so successful and pervasive, we should stop talking about it as mobile and just call it computers. I’ve got worldwide staff and Council Members, and we are taking these large portals as web applications and disaggregating them into smaller mobile applications. Surprisingly, none of our infrastructure is impacted by this move because of our ability to build on Ruby, Python, or Node, run it on Heroku, spin up an application, and build a web service to talk to data. The only question is can you build a good mobile app, and the infrastructure doesn’t seem to be at issue. I’ve never felt pinned on infrastructure, not since Demand Media, because there we had a lot of inherited, bizarre assets that would get us pinned down in terms of how they could work with whatever infrastructure choice we were making.

GC: So GLG has a lot of legacy applications, such as the Council Member portal. You recently migrated your CRM over to Salesforce and it seems to have been an uphill battle [internally]. From your perspective, are legacy applications a hurdle when it comes to cloud options for companies of GLG’s scale or are they just another component to what you have to accept when making that transition?

WB: I guess if somebody laid out a technology strategy where they want everything on the cloud, they would have to move our applications. But I wouldn’t do such a thing. I don’t see anything that says there is a specific benefit to moving a legacy application in that way. I’d have to see some cost analysis that I haven’t seen that would say, “I would save money by running my legacy application elsewhere.” Because it’s usually cheaper to just ignore it, or cut the spending or staffing that supports the legacy systems, than it is to move them around. In terms of technical hurdles, the only technical hurdle is latency, which is to say that if you have relatively large databases, moving them around or pulling data from them can take a long time. It’s not practical or wise to attempt to move everything at once, so you’ll always have some distance between new applications and your older applications. If you put a bunch of stuff in the cloud and you have to query an existing database that happens to be in a legacy data center, you have to work around that. New enterprises simply don’t have that sort of issue. There aren’t any legacy databases to worry about!

GC: You are the top technical leader in an enterprise which gives you a pretty good view into how a lot of the different prevailing industry winds could impact your business. You are also in the position where you can cherry pick the things you like the best and bring them in to help augment and make more robust the strategy you are choosing to employ. From that position as a chief consumer, what are your views on the cloud industry at large? What are the trends you see, what is interesting to you, what are the things you want to see more of or where do you see shortcomings?

WB: A raw infrastructure-as-a-service, EC2 style, is pretty widely accepted as usable and programmable. Though there are clones of it appearing, AWS’ pricing is still everyone’s favorite. That is one where incrementally you see new services, particularly from Amazon leading the pack, which you can access programmatically. Amazon seems to have a new feature every month. The platform-as-a-service providers that have a little bit higher-level stack don’t have that great a value proposition to me right now because they are sort of in between. You have something that is a full-on platform and application development ecosystem, or you have something like Facebook that has a hosted application. But the things in the middle, like Parse or the various services that honestly seem quite targeted at providing data center services and application services to start-ups — I think there’s maybe a little bit too much money in the market. There are people building start-ups for start-ups, which is not prudent, and so what I really wish is that more investment was taking place to attack the classic small IT application. Why hasn’t somebody built a platform and services that essentially recreate the ability to do FoxPro or Access applications quickly and easily? And I don’t mean that they’re actually FoxPro or Access applications; rather, I mean platforms with the ability for a relatively junior IT analyst who can only just barely program to go and make an application and get it on the air fairly well. Salesforce is close, but it ends up being in the context of a “big app.” That sort of “little app” architecture would be really great.

GC: How do you see Amazon’s place in this industry? They are pretty much the undisputed leader in terms of infrastructure-as-a-service, but are they going to maintain their control of the market share?

WB: No, I think there’ll be increasingly a lot of competition for them because the APIs are sort of publicly available and people know them. There’s really nothing about what they’re doing that’s a giant secret; I mean, you could copy it. It’s just a matter of putting enough capital into it to make a living at it. I suspect there’ll be a lot of choices and competition… Amazon and Rackspace have their specific offerings along similar lines. IBM and Dell will move in that direction as well, as will HP. We will cease to think of simply buying computers from Dell or Lenovo or HP, and instead base decision-making on purchasing “platform on demand” from those guys. Then the IT purchasing decision ends up being about mobile devices and laptops… it’ll be ultra-ultra-personal devices supported by cloud and related services. Meanwhile there will still be this whole “buying servers thing” in the middle, and we’ll view it as a weird, awkward holdover, like, “dude, you still buy mainframes and stuff…” It’ll be one of those things where everyone goes, “Oh yeah! You’ve got one of those… cool…” We consolidated down so much by using internal cloud: a picture went around last week where we had stacks and stacks of old computers we’re throwing away. It was kind of a joke, like, “Wow! We spent money for all that? That’s a shame!” These are 3- to 5-year-old computers in most cases. But the cloud technique and that automation provisioning just works really, really well. So we knocked out 15-20 percent of our total IT costs in the last year.

And we’re doing more; I mean, we’re actually tracking by project throughput. We’re doing 3 times as many releases and projects with all of the automation benefits we enjoy. My combined IT group will do between 20 and 40 software releases a day. And because the automation level is so high, it’s really the cloud that lets us do it. I mean “cloud” in the most inclusive sense, whether you’re running internal or external. Once you get your automation level so fine-tuned, we literally will release one feature at a time… so we don’t do releases in the classic sense anymore; we do features. Somebody asks for a feature; we work on it; we get it done… we bring them over, we do a “show ‘n’ tell…” They go, “Hey! That’s great!” and then we ship it. And that change of workflow is what cloud lets us achieve. I think this model is the next big motion for IT shops to embrace: continuous release and a development culture like a start-up’s. And it’s primarily achieved with cloud. You can really iterate just shockingly faster that way.
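The feature-at-a-time flow Ballard describes can be sketched as a tiny pipeline. The functions below are hypothetical stand-ins for real automation (a CI test run, a cloud deploy API); the point is that each feature ships independently the moment it passes, instead of waiting for a batched release:

```python
# Sketch of one-feature-at-a-time continuous release. run_tests and deploy
# are hypothetical stand-ins for a real test suite and a cloud deploy call.

def run_tests(feature):
    # Stand-in for an automated test suite; always passes in this toy.
    return True

def deploy(feature, released):
    # Stand-in for a cloud deploy API call; here we just record the release.
    released.append(feature)

def ship_feature(feature, released):
    """Build, test, demo, ship: one feature per release, not a big batch."""
    if not run_tests(feature):
        raise RuntimeError(f"{feature}: tests failed, nothing ships")
    deploy(feature, released)

released = []
for feature in ["search-filters", "mobile-login", "csv-export"]:
    ship_feature(feature, released)   # three independent releases in one day
print(released)   # ['search-filters', 'mobile-login', 'csv-export']
```

Because each release carries exactly one feature, a failure blocks only that feature, which is what makes 20-40 releases a day tractable where a quarterly batch would not be.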

GC: Who is doing that [continuous deployment]?

WB: Etsy for one… we are, obviously… most small start-ups, like Github for example, are really good about it. But I think you’ll find most start-ups are. In an IT setting, I don’t know anybody who is. I think what’s still prevailing is that slow, scheduled scrum where people convince themselves that 2- or 3-week releases are state-of-the-art fast for IT. From what I’ve seen when I go and meet people and talk to them, they’re like, “Oh yeah! We’re goin’ fast… we’re doin’ agile! I release every 3 weeks!” I’m like, “Yeah, I did 20 this morning!” It’s because the automation of the cloud lets you get there. If a release is annual and people have to copy stuff, and it’s all on servers, and they go through their big ritual, the cost and sort of psychic pain of doing a release is going to be high. In cases like this, you can very quickly talk yourself out of doing it very often if it takes hours to do it every time. But if it takes no time, you do it constantly. So the thing that people and most IT shops don’t really understand about cloud is that it’s not about, “Oh, where do I put my computers and where is the security coming from?” It’s about programmable infrastructure and going fast.

By Jake Gardner

Posted on November 6, 2012 in Cloud Computing Industry