Is cloud having a negative impact on IT spend and jobs?

I have always been a performance and efficiency geek. Throughout my career, I have worked on making software run faster and more efficiently, on automating many development and business processes, and on helping organizations increase their application delivery agility. Occasionally, I encountered resistance to increasing efficiency, typically within large organizations where people were worried about job security, but I rarely thought of any downsides to this efficiency-enhancing work.

When I was in the Israeli Air Force, we made a game-changing enhancement to developer productivity. We offered it for free to our US supplier, since we knew our work would benefit future deliverables. I was quite shocked that the guys who had been doing the same work for many years had no interest in adopting a better way. Clearly being more productive can threaten job security, which seemed to be the reason for their lack of interest.

Later, when I spent time working with a variety of organizations to adopt a continuous delivery methodology, I found many had an interest in increasing efficiency, but I also met some significant resistance from the IT ops teams, who had no desire to move towards a DevOps culture. Possible reason? Some of the teams didn’t want to automate themselves out of a job. They clearly hadn’t heard one of my favorite sayings: “the most irreplaceable people are the ones that make themselves replaceable.”

Currently I work for Amazon Web Services, where we strive to maximize the value delivered to customers by building the best, most cost-effective services. We are able to do that by leveraging innovation and economies of scale, delivering unprecedented value at customer-friendly costs. This model is different from what we have seen in the past. There is significantly less hardware waste, because we are able to dramatically improve hardware utilization, and the cost savings go back to the customer. There is no shelfware, since customers only pay for what they use and therefore incur less wasted spend. And customers benefit from not needing to have their IT staff manually handle many tedious tasks.

But this raises the big question of whether cloud will erase a huge amount of IT spend and jobs due to increased efficiencies and lower costs. Is progress actually having a negative impact on my field, and am I contributing to its demise? Am I innovating myself out of a job? Should IT go back to buying hardware, manually plugging in cables, and buying, installing, and self-managing software?

A couple of years ago, I encountered a similar question related to PHP. We were getting ready to release PHP 7, the next major version of the most widely deployed web development language. The new version promised to be at least twice as fast, with significantly lower memory usage than prior versions. On average, one needed at most half the machines (and often fewer) to drive the same amount of workload. At the time, we were collaborating with Intel on performance and efficiency enhancements, and one person was pondering whether the overall server monetization opportunity (software and hardware) would be negatively impacted.

An Intel engineering manager pointed to the Jevons paradox, which occurs when technological progress increases the efficiency with which a resource is used, but the rate of consumption of that resource rises anyway because of increasing demand. Consistent with the paradox, the manager suggested that with PHP being so much more efficient, companies would likely use more of it and derive additional use-cases for the language, causing the overall addressable market to grow. Initially, I had to scratch my head; it was easier to think that 2-3x the density on a given server meant at least 50% fewer servers! But the more I read about the Jevons paradox, the more credible I found the case that efficiency gains (in our case, typically 3x more throughput for a given server) could end up growing overall consumption rather than shrinking it.
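
To make the arithmetic concrete, here is a toy back-of-the-envelope sketch. The numbers are hypothetical illustrations, not benchmarks; the point is simply that for a fixed workload the fleet shrinks, while demand growth unlocked by the lower cost per request can grow it again.

```python
# Toy illustration of the Jevons paradox applied to server efficiency.
# All numbers below are hypothetical, chosen only to illustrate the mechanic.

requests_per_server_old = 1_000   # requests/sec one server handles on the old runtime
requests_per_server_new = 3_000   # ~3x throughput per server after the efficiency gain

workload_today = 90_000           # total requests/sec across the fleet today

servers_before = workload_today / requests_per_server_old          # 90 servers
servers_same_workload = workload_today / requests_per_server_new   # 30 servers

# The naive view: the same workload now needs a third of the servers.
# The Jevons view: cheaper capacity enables new use-cases, so the workload itself grows.
workload_with_new_use_cases = 400_000   # hypothetical demand unlocked by the lower cost
servers_after_growth = workload_with_new_use_cases / requests_per_server_new  # ~133 servers

print(servers_before, servers_same_workload, servers_after_growth)
```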

I am convinced that the Jevons paradox holds true for cloud computing and the value we deliver at Amazon Web Services. By delivering not only significantly improved resource and cost efficiencies but also making the resources easier to use, we see IT consuming more software and hardware than ever before. The cost of entry is low, so organizations are able to test and adopt new capabilities quickly, without long and tedious decision cycles and without the risk of wasted shelfware. In addition, we are seeing customers build out new use-cases that were previously difficult and/or typically reserved for only a select few companies. New use-cases are driving an increase in IT innovation in both breadth and depth, which adds to increased consumption.

But what about the IT staff whose job security rested on waste and manual work? The growth in cloud adoption, consumption, and use-cases appears to be driving demand for even more talent that can leverage these new technologies and fully automate the next generation of applications; there’s plenty of work!

All signs point to more, not less, demand for IT talent, along with increased resource consumption. This is an amazing time to be in IT!

Disclaimer: The opinions in this post are solely my own and do not represent the opinion of my employer. My opinions are based on what I see in the broad industry and my own personal experiences.


Why I am joining the Amazon Web Services big data group

Today is my first day in the AWS data services team, and I am very excited to be starting this next chapter in my career.

After being immersed in co-founding and managing a start-up, the transition to AWS was a very significant decision for me. I love to embrace change and take risks, but joining a company the size of Amazon would be a very different experience! I realized I had to do some deep thinking. In this blog post, I’ll shed light on some of my thought process and the overall opportunity that lies ahead.

Cloud infrastructure adoption is at a tipping point

Data point #1:

Back in May 2008, I gave a keynote presentation at a developer conference. 25% of my slides were devoted to that “next big wave”: the cloud. My call to action was for developers to leverage cloud services and seek out new business opportunities in cloud services delivery. While I got good questions and coverage around many topics, there was little engagement on the cloud: it was still too new…

Data point #2:

Fast forward to last month: in my prior role, I met with one of our Fortune 100 customers. For this company, web- and mobile-facing experiences are critical to driving growth in their business, so they have a strategic focus on accelerating the pace of innovation while also ensuring compliance, scalability, manageability, and cost control. The customer described how they had made the strategic decision to move all their infrastructure to the cloud, specifically to AWS!

What a difference eight years makes!

Yes, eight years is an eternity in technology, but what excites me most is that I believe we are actually still at the very beginning of a massive transition. Sure: today, most enterprises are using the cloud in some way, but it’s still early days: businesses haven’t yet made the overarching strategic decision to move most of their IT operations to the cloud.

The data “center of gravity” is moving to the cloud

Why hasn’t this happened yet?  I believe that cloud adoption has been hindered by the majority of business data still languishing on-premises. This has been especially true for the systems of record (typically but not exclusively SQL-based), such as ERP, CRM, data warehouses, business applications, and more.

As enterprise applications move to SaaS and the majority of new data is generated in the public cloud (e.g. IoT, e-commerce, mobile), the center of data gravity will move outside the firewall.  As this transition accelerates, the demand for cloud data services will grow substantially, because it is cheaper and more effective to bring compute to the data, rather than the reverse…

AWS has established clear leadership with services such as Redshift, RDS, ElastiCache, and DynamoDB. But these services (and others) are just scratching the surface. In AWS, I have found an organization that is willing to push the envelope and take risks to capture the full cloud data services opportunity.

Insights and context delivered with “real-time” big data

Both B2B and B2C applications are going through a radical transformation, not only becoming more mobile but significantly more contextual in nature. Previously, I have emphasized the critical intersection between cloud services, mobility and context.

Consider the following two customer use cases I have personally witnessed:

  • When you and your kids walk into Disney World, the mobile application automatically detects you and personalizes its user experience to your profile, your location, and what’s currently going on in the park.
  • A CIO of a large trucking company is interested in transforming logistics by leveraging mobility and real-time context – optimizing trucker routes and data collection based on real-time inventory changes, traffic, driver time spent on the road and more.

Context-aware use-cases are becoming pervasive in every industry. Context can be deduced from many sources including location, device sensors and data (IoT), operational databases, personal computing devices, cloud data services such as weather forecasts, and more.

As personalization with increased context becomes more sophisticated, the challenges in delivering these contextual real-time insights become more significant. At AWS, I will be part of a team with a major focus on delivering real-time responses to enable the next generation of apps and services. The challenges are multi-faceted and include: in-memory databases, scalability and high availability, different database paradigms (not one-size-fits-all) and strengthening developer productivity to support the required time-to-market.

A culture of innovation (and invention)

Coming from the startup world, I have a bias to focus on value innovation and timing. In startups, there is little correlation between building a successful business and inventing.  To better understand this point, read Tom Grasty’s article “The Difference Between ‘Invention’ and ‘Innovation’.”

If invention is a pebble tossed in the pond, innovation is the rippling effect that pebble causes. Someone has to toss the pebble. That’s the inventor. Someone has to recognize the ripple will eventually become a wave. That’s the entrepreneur. Entrepreneurs don’t stop at the water’s edge. They watch the ripples and spot the next big wave before it happens. And it’s the act of anticipating and riding that “next big wave” that drives the innovative nature in every entrepreneur.

What attracted me to AWS is that (in addition to very smart, capable and motivated people) it appears to effectively balance innovation and invention: a focus on customer value with a bias to action. True innovation happens when different technologies and ideas are integrated to deliver a superior, differentiated customer experience. Teams are heterogeneous and have a broad set of academic and practical backgrounds with a startup mentality of getting the job done.

I can’t wait to get started! Looking forward to tackling big problems, working with super smart people, and making a big impact!

Oh, and we are hiring! We’re looking for great people who want to be part of reshaping IT: seasoned engineering leaders, product managers, and great software engineers. Distributed systems, high-performance in-memory compute, and more… Contact me!

New journey

Starting a new journey…

I’ve decided to pursue a new chapter in my professional career.

While this is the right decision for me, it was a very emotional one. Every day we help change the world at Zend, the company I co-founded with Zeev Suraski. PHP today runs over 50% of the web, and I’m proud of the role we played in making that happen. A number of years ago PHP crossed the chasm, and at Zend we now serve a large roster of enterprises running business-critical PHP applications. Some of the biggest brands on the planet! (cartoons, fruit, and many more…)

We have also changed the game in many other ways. With Zend Framework we raised awareness of enterprise frameworks and best practices in the PHP community. PHP 7, which was recently released, is providing an amazing tailwind to ongoing PHP adoption. Many companies talk about their mission and impact, yet very few people have the opportunity to participate in a company that helped transform a market as big as the Web. I couldn’t be more proud of what the great teams at Zend have accomplished over the years.

Last year, Zend was acquired by Rogue Wave Software, a company whose software we’ve all touched at some point in our lives. In fact, I used SourcePro as a C++ developer working on critical avionics-related systems. With virtually identical mission statements, it made perfect sense to join forces in what is a very natural fit.

The past months at Rogue Wave have been a great experience, very rewarding and welcoming. Rogue Wave offers Zend much more significant enterprise reach, and the company’s strong open source capabilities and community contributions to Linux and other popular OSS projects strengthen Zend’s own open source efforts. I am excited about the potential for both PHP and the Zend enterprise offering going forward!

I’ve enjoyed every minute of the ride and at the same time accumulated more gray hair. My new chapter affords me the opportunity to focus more of my time on product, something I love and am passionate about. I am looking forward to working on new problems that make an impact for many.

Needless to say, my bonds to Zend and the PHP community are, and remain, strong. I will continue to be a vocal supporter of the Zend products and the team. I will continue to seek out opportunities to add to my elephpant herd (by far, my children’s #1 concern with me leaving). And I will continue to contribute to the PHP community and be an ambassador for PHP.

And last but not least, a special thank you to Zeev Suraski, the Ze in Zend and my (almost) lifelong partner and friend, who kicks ass. I wish everyone could be as fortunate as I have been in having such a great person share the roller coaster ride of building a company.

This is not a goodbye, but a see you in the neighborhood!


Tackling the #1 employee-leadership divide: lack of communication

The most common employee complaint is a lack of sufficient communication within the organization. In talking to leaders from a variety of companies, it has become clear to me that this is an ongoing issue for many. Not only does this feedback manifest itself in demotivated employees; the longer-term impacts include ineffective cross-functional collaboration, less creative problem solving, and eventually increased unwanted attrition. Most organizations do identify the problem through a variety of methods, including employee feedback surveys, exit interviews, and one-on-one conversations.

When the “lack of communication” feedback repeats itself enough times, it usually escalates to executive leadership as a strategic organizational issue. As a result, actions are taken to significantly increase communication within the organization. It typically starts with the CEO stepping up information sharing in all-hands calls and creating additional opportunities to share more detail re: strategy and organizational change. In addition, each functional leader makes an extra effort to pick up the pace around communications – more update emails go out to the team, and there is an increase in transparency around plans and changes.

Leadership is feeling better. They have really stepped up their communications and have made an extra effort. But the negative feedback continues to trickle in… Lack of communication continues to be the feedback du jour. One digs deeper to try to understand whether this feedback may be coming from a part of the organization that has been left out or forgotten. No! It’s coming from people who just last week were fully briefed, in person, on the plans… Surprise quickly becomes anger and frustration… How can these people still feel they have not been communicated to?

I have seen this scenario repeat itself many times, in different settings. As a general rule of thumb, this is a leadership challenge, not an employee challenge. In fact, many times this breakdown actually happens within the management layer itself, when first- and second-line managers feel they are not being effectively communicated to. My conclusion is that the term “communication” itself is too broad and ill-defined, which leads to ineffective conversations that focus primarily on the information-sharing aspect of communication.

I now try to clearly differentiate between the term “communication” and “engagement” to bring more clarity into the discussion. Embracing the differences and acting on them can completely change the game for an organization.


I believe when management is communicating, they truly are doing just that: communicating the facts and sharing information. This sharing tends to be very unidirectional (emails, all-hands calls, team meetings), with limited or no opportunity to converse. And no, most employees and first-line managers don’t typically ask the juicy questions in front of everyone when this form of communication is going on. This style is focused on information sharing, and it is often even done well, with a fair bit of transparency re: plans and challenges. Management really is working hard to make this form of communication effective. It’s not due to a lack of trying.

The problem with this form of communication, which is important but insufficient, is that it does not address some key areas that any high-performance organization needs to address. Such areas include creating opportunities for joint problem solving, jointly refining ideas and plans, and creating dialogue across two or more functions. Most important, this form of communication does not enable enough dialogue for people to “disagree and commit” when tough decisions are being made.

Decisions in an organization need to be made on a daily basis, quickly. Everyone understands that, and employees want decisions to be made. In most cases, they don’t even mind if it’s a sub-optimal decision, as long as they can understand why the decision was made. This form of communication can quickly lead to a “management doesn’t know what it’s doing” perception. Why? Because management is seemingly not engaged with the people who have the knowledge, and therefore decisions come across as quite arbitrary.


Engagement, while arguably a form of communication, is about bringing the team along on the journey. It is not about feeding them information (although that is also important) but rather ensuring that, on an ongoing basis, they are part of the conversation re: how the organization moves forward. At the core of engagement is creating as many opportunities as possible for bidirectional communication (vs. unidirectional communication). It needs to include many discussions, asking for opinions, soliciting feedback, and more. Said differently, it is communicating in a way that involves people and makes them a part of everyday decision making and change management.

Problems should not only be shared but should be jointly tackled, involving the appropriate players. These players do not all need to be in the same function; cross-functional involvement should be encouraged where possible. In fact, with the right level of engagement in the organization, you quickly find that management owns fewer problems on its own, cross-functional communication and alignment increase, and employees and their managers have enough visibility into the decision-making process that they can more easily “disagree and commit” when they don’t agree with a decision.

The last point on decision making is very important. Being an engaging leader does not mean that the organization becomes a democracy. Not at all. Decisions need to be made and often these are difficult decisions. But engagement does ensure that those decisions have the best possible chances to get commitment and are implemented quickly and effectively.

I have put together a small table that summarizes, in a simplistic way, some of the key differences between my view of communication and engagement:

[Table: communication vs. engagement]

The path towards engagement

Transitioning an organization from primarily communicating to also engaging is not an easy task. I think the first priority needs to be to clearly define, as a team, what engagement means for your organization. Try to stay away from a catch-all communications discussion, or you’ll fall back into the trap where some leaders (executive and mid-management) will think their job is done when they have communicated information. That’s why I like calling it something different: it helps crystallize for the team the expectation of day-to-day engagement with managers and employees and, with that, of earning their trust and commitment.

If one key leader is not able to make the transition, it can all break down due to the negative cross-functional implications. Some leaders will naturally gravitate more towards engaging than just communicating. It is actually quite easy to spot those leaders: you will typically see a much higher sense of loyalty, better employee retention, stronger cross-functional bonds, and higher employee satisfaction within their organizations. So can leaders who are primarily communicating be taught to be engaging? There will always be leaders who are self-aware enough and are able and willing to learn and grow. You may find some never make it; that should become apparent quite quickly, at which point the organization may need to decide they are just not the right fit.

Has your organization transitioned towards a more engaging operating style? Please share your experiences in the comments.

Do not lose touch with the buyer

Is technology distancing us from the buyer?

The breadth and depth of tools in the marketing (and sales) technology landscape is exploding. Many of you may have seen Scott Brinker’s overwhelming Marketing Technology Landscape graphic. Gartner predicts that by 2017 the CMO will spend more on IT than the CIO. There are a number of key drivers for this change.

We have all heard the statistics showing that the overwhelming majority of buyers do their research online before completing an offline purchase. This means that a major emphasis of growth marketing is on being discovered and educating the buyer online, ultimately in the hope of generating a high-value MQL (marketing qualified lead). Just this one sentence represents hundreds to thousands of tech companies focused on marketing automation, SEO/SEM, content management and distribution, predictive lead scoring, and many more categories. In addition, these systems then need to integrate with the sales IT systems. The CMO today is not only being bombarded from all directions by hundreds of vendors but also has to answer to boards and the executive team on inbound marketing strategies and metrics.

The pendulum has definitely swung from traditional direct and in-person marketing towards content and education. Sales leaders are required to deliver detailed, real-time metrics on lead conversions, opportunity creation rates, ASP, yields, pipeline growth, churn, and more. While I believe the pendulum needed to swing, I question whether management teams are over-rotating in the other direction and distancing themselves from the buyer by focusing too much of their time on sales and marketing technology implementations vs. spending time learning from and selling to customers.

Some things to consider:

  1. Lead scoring (predictive or not) tries to ensure sales reps spend their time on higher-quality leads and enjoy higher conversion rates to opportunities. This does not mean there isn’t opportunity in the non-marketing-qualified leads, but there is an assumption that it becomes progressively harder to find the needle (opportunity) in the haystack (raw leads). At Zend we saw many of our most lucrative deals come from leads that had been in our database for quite some time and had just never been qualified in. So while not always possible, it may be more impactful to dig deeper into the lead pool and find cost-effective ways to do so. Less time on tuning the system and guesswork, and more time on picking up the phone and truly qualifying leads by talking to people.
  2. Digital marketing promises an abundance of leads at a fraction of the cost of more traditional approaches such as events. I do believe digital marketing is critical, but in certain (not all) markets we have seen strong results from in-person events. Typically, a prospect’s commitment to a conversation and follow-up is higher in a face-to-face encounter. It also enables the sales reps to much better distinguish the tire kicker from the prospect who truly has the interest and influence to move things forward. So before you abandon in-person marketing opportunities, think it through carefully.
  3. Meeting customers (new and existing) in their offices is invaluable. The ability to truly understand motive, environment, and the various stakeholders goes way up. It also makes it much easier to get honest feedback from customers and build a personal relationship, which can contribute at many levels, e.g. getting the customer’s help to bring a deal in before the end of the quarter, or getting a customer’s commitment to contribute time as a design partner on a new feature. We consistently saw that end-of-quarter deals were more likely to close if there had been some face-to-face connection with the key stakeholder. Again, the more time leaders spend in the office tweaking the systems, the less time they spend on a plane having quality time with customers.

Don’t get me wrong. I am a technologist, very metrics-driven, and I absolutely believe the pendulum needed to shift towards more online engagement and education. I also believe that sales and marketing leaders need to be held accountable for both the forward- and rear-looking business metrics. But I do believe that sales and marketing leaders are spending less and less time with customers due to the increased overhead of implementing systems and reporting on the metrics.

There is no better way to gain new customers and learn how your existing customers view your value proposition than human-to-human interaction. My advice to sales and marketing leaders is to embrace technology but don’t let it outright consume you. Carve out a sustainable amount of your and your team’s time to make systems improvements, but ensure that time is well spent and managed. One of my past board members would say, “You can report on the news or you can make the news.” I prefer to make the news!


Freemium and open source business models: friends or foes?

Basics of the freemium model

The freemium model started being popularized in the 1980s, frequently under the term “shareware”. The term “freemium” itself became better known in the past decade. Freemium stands for the combination of “free” and “premium”. It refers to a licensing model that starts at free, in the hope of gaining broad traction, and then converts a subset of users to a for-pay, premium version of the software or service.

In the era of information overload and too much choice, the freemium model promises an awareness and marketing boost by enabling broad, lower-friction adoption. Generating the necessary pull to make a freemium model work is critical, and it requires providing enough value in the free edition to drive broad, preferably word-of-mouth, adoption.

To then encourage free-to-paid conversions, it is important to expose some of the value-add premium capabilities within the free product. This is usually accomplished by exposing premium functionality with some limitations. The ability to use the product perpetually for free is what makes a freemium model different from a free-trial model; in a free-trial model the buyer knows they will need to make the buying decision within a matter of weeks.

Making a freemium model successful

The trickiest aspect of making a freemium model successful is getting a number of levers right:

  1. Provide enough value in the free version to ensure broad (preferably viral) adoption.
  2. Have clear crossover points from free to paid so that the target prospect has strong motivation to pull the trigger on going premium. There are many ways to differentiate free and paid, including storage space, bandwidth consumption, features, support SLAs, and capacity.
  3. Implement effective premium tiering to drive up the average selling price and tailor to multiple prospective buyers and use-cases.

To motivate a user who is content with the free version to move to paid, it is important to have big, must-have reasons to make the purchase. Preferably the user hits some limitation that is important to them, which creates a compelling event. Incremental motivating drivers (should-haves or nice-to-haves) may not create enough urgency when the free version is “good enough”.

While in the free-trial model the trial expiration is a hard limitation that forces a buying decision, the same does not exist in a freemium model. This is why freemium models tend to work best when there are significant, hard, must-have reasons to upgrade, such as needing more storage space (e.g. Dropbox or Box), data volume limits (e.g. Splunk), or the number of servers required to meet a needed amount of capacity. When such clear reasons exist, it tends to be easier for the vendor to give away more features for free to drive broad adoption of the product. Also, in such situations there is less cannibalization risk, because the paying audience is the one making significant use of the product.

The quality of implementation of the freemium model within the product is also a critical success factor. Not only is it best to find ways to expose the value-add within the free version, but there should also be a smooth, low-friction path from free to paid. To ensure that happens, the product itself should be upgradable, preferably in product, so that the user can continue their work right away. This is easily achieved when delivering software as a service, but in the shrink-wrapped software world it typically requires the free version to ship with the premium features ready to be turned on with a change in license key. Needless to say, it can get quite tricky to implement an effective freemium model within the product, and there are many nuances to making it successful.
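
As a rough illustration of that in-product upgrade path, here is a minimal sketch of license-key feature gating. The tier names, feature list, and key format are hypothetical, and a real product would verify a cryptographically signed key rather than parse a prefix:

```python
# Minimal sketch of license-key feature gating in a freemium product.
# Tier names, features, and key format are hypothetical, for illustration only.

PREMIUM_FEATURES = {
    "free": set(),
    "pro": {"monitoring", "sso"},
    "enterprise": {"monitoring", "sso", "clustering", "audit_log"},
}

def tier_from_license_key(key):
    """Return the tier encoded in a (hypothetical) license key; no key means free."""
    if not key:
        return "free"
    # A real product would verify a cryptographic signature here; we just parse a prefix.
    prefix = key.split("-", 1)[0].lower()
    return prefix if prefix in PREMIUM_FEATURES else "free"

def feature_enabled(feature, key):
    """All code ships in every build; premium paths are simply gated at runtime."""
    return feature in PREMIUM_FEATURES[tier_from_license_key(key)]

# Upgrading from free to paid is just installing a new key: no reinstall, no migration.
assert not feature_enabled("clustering", None)
assert feature_enabled("clustering", "ENTERPRISE-1234-ABCD")
```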

Freemium as part of an open source business model

Monetizing open source projects via freemium models is even more complicated. Many open source vendors monetize by developing proprietary value-add functionality on top of the open source software. The value-add, in conjunction with support SLAs, is what drives conversions from open source to paid. The open source business model on its own is not a true freemium model, as the open source project usually does not expose the vendor’s value-add. In a significant number of cases, the open source vendor is not the one who controls the distribution of the open source project and/or cannot, for political reasons, bundle proprietary value-add within the open source project. Therefore, the open source software is not equivalent to the free version in a traditional freemium model.

Hence, in order to implement a true freemium model in an open source market, the vendor often needs to have a “community edition” (CE), which is a free, value-add version of the open source project. The goal would be for the CE version to be the way users adopt the open source project, while at the same time exposing some of the value-add of the premium version.

However, this exposes a number of challenges which are unique to freemium models as part of an open source business model:

  1. There is one more level of differentiation that the vendor needs to tailor to, which can ultimately dilute the value-add of the premium version. The vendor is not only contributing functionality to the open source project to ensure its ongoing success (the foundation of the business); they also need to give away functionality that differentiates the CE version from the open source project, to create enough motivation for users to obtain the OSS project via the CE product. In this open source differentiation model there is one additional “free” tier to differentiate, which means the vendor’s value-add functionality is now spread across three tiers (open source, free, and premium), making it harder to retain enough must-have value for the premium editions.
  2. As noted earlier, the most effective premium value-adds in freemium models tend to be related to capacity, whether storage, servers, bandwidth, or other hard limits on workload or data. These hard limits create a compelling event where the target user needs to make a buying decision. However, in an open source model, the workload-related limitations that are usually most effective are irrelevant, because the base open source project can be deployed in an unlimited fashion; think of MySQL (unlimited storage), Hadoop (unlimited compute), or Lucene (unlimited indexing). This is probably the biggest reason why freemium models are difficult to implement in an open source market. There can be workload-related limitations on the proprietary value-add (e.g. APM limits, servers-under-management limits), but the base workload that has made the open source project so successful cannot effectively be limited, because it is not limited in the open source version.

While I am not saying that a freemium model can never work in an open source business model, most of the successful open source companies have implemented some form of free-trial model rather than a true freemium model. This approach creates clearer differentiation between the premium product and the open source project. This is especially important because workload-based limitations do not typically work as well in open source freemium models, so all the possible value-add needs to be focused on differentiating the premium version from open source, rather than diluting the value-add with a second free version between open source and premium.

For example, in 2003 Red Hat eliminated Red Hat Linux (their free distribution) in favor of Red Hat Enterprise Linux (subscription-only binaries). With that, they eliminated the middle free tier between open source Linux and a premium enterprise offering: there was no longer such a thing as a commercially-blessed but unsupported Red Hat distribution. That change enabled Red Hat to become a billion-dollar, high-margin company. Red Hat must have realized that it is too difficult, in an open source model, to differentiate your product twice. It worked!

All this is not to say you cannot build a very lucrative business on open source. Many have done so, including Red Hat, Cloudera, MySQL, Zend, and others, but I do believe the freemium model may not be the best fit for many open source companies. In most cases, open source companies will be best off focusing on two initiatives: making the OSS project successful and concentrating their value-add in the premium offering. And in cases where it makes sense to deliver some of the value via a service, even better; there, workload capacity can be an effective limiting factor.

Would love to hear your thoughts in the comments below. In what cases do you believe a freemium model does or does not work as part of an open source business model?


Buyer personas: the often missed ingredient to product roadmap planning

Product roadmap planning is one of the trickier product management tasks. The product leader’s role is to balance all the different product roadmap pressure points and focus on maximizing the business outcome. The number of variables the PM needs to take into account is substantial. This makes the job that much more difficult, especially as engineering resources tend to be constrained.

Some of the contributing factors influencing how product leaders will weigh priorities include:

  • New customer acquisition vs. customer success focus. Product investments targeting new customer acquisition will typically be somewhat different from investments targeting customer retention. Of course, you’d like to focus on areas that benefit both, but that’s not always possible. For example, the new addition that demos well and shows instant value will not necessarily be important for longer-term customer retention.
  • Bugs and feature requests. Typically, these requests come directly from customers via the support or technical field organizations and could fill up the whole roadmap for many years to come. Also known as “the bug tracking system”.
  • Longer-term product vision and strategy. Typically, the CEO and other key leaders in the company have a long-term view for where the company needs to go to maximize market opportunity and outcome. Such investments will often be in conflict with short-term pressure points.
  • Competitive dynamics. There are situations where competitive dynamics really come into play but often they are not the primary reason why the business isn’t growing faster. There is risk of thinking too much about competition although it can be important if you’re clearly and consistently losing to the competition on a very clear set of capabilities.
  • Technical debt. No one except the product people cares about this category. But we all know that if it is ignored for too long, technical debt can lead to major velocity and longer-term customer satisfaction challenges.
  • Large customer requirements. Often large customers take your product to new limits. As part of that, there may be recurring asks from the largest customers that are not as relevant to the broader customer base. These are often tricky requests and need to be weighed very carefully, as they may require a lot of investment for a small set of high-paying customers. Weighing opportunity cost against benefit is key.
  • Broader market dynamics. The market is always shifting in new directions. In the short term such investments typically won’t pay off in revenue, but if you don’t get ready today you will be left behind. For example, compare companies who were proactive about the cloud with companies who were late and reactive and are now playing catch-up. Also known as getting stuck in the innovator’s dilemma.
  • Expanding use-cases. There are often ways to make adaptations to the product that broaden its applicability into new use-case scenarios. For example, Splunk moved from being primarily an IT-centric log analytics engine to broadening the same technology to machine-generated data and tailoring it to additional non-IT use-cases, e.g. sales and marketing metrics. This can fall into the longer-term product vision category, but I think it deserves a separate callout, as it is often less about making a big technology investment and more about a minor investment plus repackaging and repositioning. The go-to-market impact may be a lot more significant.

And the list goes on… Needless to say, product roadmap planning is a very challenging effort, and weighing the short and long term effectively within the constraints of finite engineering resources is often the product leader’s biggest challenge.

There is one aspect of product roadmap planning that I feel many product leaders leave out but should not: matching up buyer personas with the product roadmap. For most products, there is not just one target persona. While not every persona is the decision maker, it becomes critical that there be value for each influencing persona. Therefore, constantly weighing the product value by persona will also give the product leader a good idea of which personas are being well taken care of vs. others who may not perceive as much value in the offering. While many companies try to map out their target personas, this is typically done in a very shallow way and ultimately is not used to truly weigh product value and roadmap investments. It tends to be used more in targeting exercises by marketing and sales enablement tools.

Some key things the PM should know about the target personas include:

  • What flavor of titles they may have.
  • What some of their characteristics are: their background and thought process, what they care about, how they typically spend their time, and what makes them look good in front of their bosses.
  • What their key concerns are: what they need to get done, what they need to improve, and what keeps them up at night.
  • What value you currently offer that target persona. If you are very honest with yourself, in a product that requires buy-in from multiple personas, you will never have every feature be valuable to every persona. It needs to be crystal clear how the product value intersects with each persona’s concerns, and you need to ensure there is perceived value there.

At Zend, the company I co-founded, we spent time mapping out the value per persona. Our PHP application server, Zend Server, has a number of personas that need to buy in to the product, including the head of development, the production operations team, and developers. We are a very developer-centric organization and have invested in developer tools for many years. However, as we went through this exercise it became very clear that we had very strong value for the head of development, who was constantly looking to further professionalize and streamline application delivery (our decision maker), and strong value for the production operations team, thanks to strong management, security, compliance, and DevOps capabilities.

But as we mapped our capabilities onto the developer, it became clear to us that while we had huge value for production applications, we were a bit lighter on the “what’s in it for the developer?”. That was very interesting, given that developers are our #1 lead source thanks to the successful developer tools we deliver to the market. But Zend Server was not primarily designed as a development tool; it was designed as a runtime capability to best support running business-critical applications.

With this learning, and the fact that developers are extremely important to us, we recast our 12-month product roadmap to prioritize value targeting the developer persona and reduced the amount of investment in enhancing our production features. While we did not reduce the production investments to zero, we significantly reduced them for a limited period of time in favor of developer value. With a very clear and immediate focus on adding more developer value, and leveraging our deep technology expertise in the PHP runtime, we designed a new product capability called Z-Ray. Z-Ray’s goal is to change the game when it comes to developer productivity and code quality: deliver in-context insights into developers’ code while they are writing it, without requiring them to change how they work. We developed this capability in an agile manner, with customers in the mix from a very early stage (alpha). The ongoing customer feedback made a good idea great, and within nine months we shipped the killer feature.

Z-Ray has made a big impact on our customers and generated real enthusiasm among the developer persona. In addition, we successfully tied the capability into the DevOps and production cycle so that it also adds value to the other target personas, including the head of development (benefit: team productivity and code quality increase), the DevOps role (benefit: reduced friction throughout the DevOps cycle by tackling errors early in the development and testing stages), and the production ops role (benefit: Z-Ray has production capabilities and helps eliminate many errors before they reach production, hence fewer nightly awakenings).

But this article is not about Z-Ray. It’s just one example of many of why weighing product value and the product roadmap against the target personas is as important an exercise as weighing against the many criteria listed above. Without clearly understanding the value delivered to each target persona, product leaders will not effectively balance product roadmap investments and will lead sales and marketing astray on how they communicate with and engage prospects.

My previous article on the product marketing discipline emphasizes how important it is these days to ensure communications have the right level of depth and specificity to truly get attention in the marketplace. Getting the personas and delivered value right is a big part of making that happen!