Cisco and HP go head to head in network automation

October 24th, 2014 by AnnMarie Jordan

Nikolay Yamakawa, Analyst for TheInfoPro

Cisco and HP lead the race in large- and midsize-enterprise network automation, gathering the highest numbers of selections for in-use implementations and future project plans. HP received the most citations for current use but is closely followed by Cisco, which has enough in-pilot/evaluation projects to pull ahead once those plans are converted into implementations. If both incumbents convert all of their future project plans into use, they will end up with almost the same number of implementations, further intensifying the rivalry in the space. SolarWinds and homegrown approaches, which share third place in current use, are other alternatives gaining traction in network automation.

Network automation enables new efficiencies that about 27% of our study respondents are already taking advantage of in 1H 2014, with a further 8% reporting future project plans. With network automation, end users don’t have to wait for staff to complete outstanding tasks before starting work on new requests, and the technology can improve service, lower costs and reduce security risks. Spending intentions remain positive in 2014 and strengthen in 2015: about 10% of decision-makers are increasing spending on network automation this year and 13% plan to do the same next year, while those with budget cuts remain constant at just 1% for both years.

Network managers should evaluate existing environments to see what approach to network automation is most suitable given existing assets, employee skill sets and strategies. Cisco and HP have gained traction in the space, and we expect the rivalry to continue in the future, but as IT environments evolve, increased competition from other vendors may follow.

Network managers had the following comments about network automation:

  • “Very interested in it. It would help us a great deal. It would be included in one of our growth projects.” – LE, Energy/Utilities
  • “It’s more from a compliance perspective, that’s where you’ll see them [network automation] playing a major role; it compares and looks at configurations from a security perspective.” – LE, Other
  • “I wish it [SDN] was in plan. That would allow us to do some better automation. There’s always this desire to have a self-service portal where people can go and say, ‘I need this service and this application on my PC.’ It would be really neat if more automated tools were in place to build those environments.” – LE, Consumer Goods/Retail

Top considerations when selecting workload execution venues

October 22nd, 2014 by AnnMarie Jordan

Peter ffoulkes, Research Director for Servers and Cloud Computing

In our newly published Wave 7 Cloud Computing Study, respondents were asked about their ‘likely primary’ execution venues for a wide range of workload categories in the next two years, including non-cloud and the variations between private/public, on-premises/off-premises, internal or hosted, and their reasons for those choices. When the results for each workload category are aggregated by cloud type, 30% selected the primary deployment to be non-cloud, 34% to be private cloud, 9% hybrid cloud and 27% public cloud within the next two years.

Top factors considered in selecting the best workload execution venue

Regardless of the preferred primary execution venue, whether non-cloud, private cloud, hybrid or public cloud, the top five criteria for selecting a preferred execution venue are security, cost, agility, functionality/ease of use and compliance/governance at 28%, 20%, 13%, 9% and 8%, respectively.
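
As a purely illustrative aside that is not drawn from the study data, the sketch below shows how criteria weights like these might feed a simple decision matrix. The weights echo the percentages above; the per-venue scores are hypothetical and would differ for every organization and workload.

```python
# Hypothetical weighted decision matrix for choosing a workload execution venue.
# Weights mirror the reported selection criteria; per-venue scores (1-5) are invented
# purely for illustration and would differ for every organization and workload.
weights = {"security": 0.28, "cost": 0.20, "agility": 0.13,
           "functionality": 0.09, "compliance": 0.08}

venues = {
    "non-cloud":     {"security": 5, "cost": 2, "agility": 2, "functionality": 3, "compliance": 5},
    "private cloud": {"security": 4, "cost": 3, "agility": 4, "functionality": 4, "compliance": 4},
    "public cloud":  {"security": 3, "cost": 4, "agility": 5, "functionality": 4, "compliance": 3},
}

def score(venue_scores):
    """Weighted sum of criterion scores for one venue."""
    return sum(weights[c] * venue_scores[c] for c in weights)

# Rank venues by weighted score, highest first.
for name, criteria in sorted(venues.items(), key=lambda v: score(v[1]), reverse=True):
    print(f"{name}: {score(criteria):.2f}")
```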

When looking at the detail of workload category distribution by cloud type, the more traditional types of enterprise workloads are weighted toward non-cloud, private and hybrid cloud deployment venues in the next two years. For public cloud vendors, collaborative and cloud-native applications offer the best opportunities.

For every organization and industry, there are unique considerations that influence the ultimate choice of execution venue, even when the same top considerations are a major part of the decision-making process. Anecdotal commentary illustrates the range of circumstances expressed by TheInfoPro’s respondent community:

  • “One of the obstructive factors to this is encryption and data security. We’d like it to be public cloud, but in reality it will be mostly off-prem non-cloud hosting. And customer requirements [are a factor].” – LE, Consumer Goods/Retail
  • “Moving into hybrid – cost savings in this category. Most back-office we’re keeping in private for integration and reliability.” – LE, Education
  • “For bursting compute capability; more elasticity.” – LE, Energy/Utilities
  • “That was one of the prime reasons they went that way, to Amazon. Quite frequently, it’s not so much a project; they’re doing an analysis. They just want a whole lot of compute power to attack it to get a solution in a reasonable amount of time. It is platform as a service, if you will, pops up and goes away when it’s done. No ongoing costs, no paying for it up front.” – LE, Healthcare/Pharmaceuticals
  • “Some of that’s coming back in-house. We do use a lot of third-party e-commerce products at brand level today. We’ll bring that back onto our enterprise platform.” – LE, Consumer Goods/Retail
  • “New next-gen apps will more likely be deployed in public cloud with linkage back to internal systems, for scale and speed, but marred with security issues.” – LE, Financial Services
  • “Depends on the application. Non-high-value data would be off-premise; high value would be on-prem.” – LE, Industrial/Manufacturing

IBM, EMC and HDS battle for SDS’s key technology: storage virtualization

October 20th, 2014 by AnnMarie Jordan

Marco Coulter, Research Director for Storage

Storage virtualization warrants consideration by enterprises, not just because it ranks in the top five on TheInfoPro’s Heat Index, but more so because storage professionals see it as a prerequisite for software-defined storage. The technology is seeing a rejuvenation in 2014. IBM continues a multi-year run as the most-selected vendor for production deployments with its SVC offering. This year, EMC pulled ahead of HDS to take second place. VMware, the first vendor with an exclusively software offering, received more production selections but stayed in fourth place.

Plans for the coming 18 months foreshadow a different picture. Enterprise plans for the technology are increasing, and the leading vendors will share the bounty, though not equally. EMC again captured the most selections, spread across multiple offerings: its VPLEX receives the most narrative mentions, although some respondents are still running the SAN switch-based Invista and others are waiting for ViPR to be clearly defined before making a choice.

As software-defined datacenters find a definition among IT pros, storage professionals are settling on storage virtualization as a key technology for software-defined storage.

Storage virtualization is not new, and the technology’s history may end up being an inhibitor to adoption. Several narrative comments mentioned failed attempts with the technology leading to a reluctance to re-engage. A common criticism of prior storage virtualization models was the resulting environmental complexity: the components came from different vendors and refreshed or updated on different lifecycles, making maintenance, diagnosis and upgrades complicated. This complexity remains today, as IBM’s SVC, EMC’s VPLEX and HDS’s VSP all require specific hardware.

Ideally, software-defined storage will make this underlying complexity of storage virtualization opaque to applications. For now, ironically, the technology remains a hardware battle.
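
To make that abstraction concrete, here is a minimal, hypothetical sketch of the idea: applications request capacity from one interface while vendor-specific backends – the class names below are placeholders, not real APIs – keep their own hardware and lifecycles out of sight.

```python
from abc import ABC, abstractmethod

class StorageBackend(ABC):
    """Vendor-specific detail hidden behind one interface (names are illustrative only)."""
    @abstractmethod
    def provision(self, name: str, size_gb: int) -> str: ...

class ArrayBackendA(StorageBackend):
    def provision(self, name, size_gb):
        # In reality this would call one vendor's management API.
        return f"lun://array-a/{name}/{size_gb}GB"

class ArrayBackendB(StorageBackend):
    def provision(self, name, size_gb):
        # A different vendor, different lifecycle -- invisible to the application.
        return f"vol://array-b/{name}?size={size_gb}"

class SoftwareDefinedStorage:
    """Applications request capacity here and never see which array serves it."""
    def __init__(self, backends):
        self.backends = backends

    def provision(self, name, size_gb):
        # Trivial placement policy for illustration; a real SDS layer would place
        # volumes by capacity, performance tier or policy.
        backend = self.backends[size_gb % len(self.backends)]
        return backend.provision(name, size_gb)

sds = SoftwareDefinedStorage([ArrayBackendA(), ArrayBackendB()])
print(sds.provision("app-db", 500))
```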

  • “We used to do that, don’t do that anymore. Because it wasn’t an integrated thing, you had to buy third party – it may not work right. The stuff went into obsolescence, but the storage didn’t. Wasn’t all in one package. We want one package, don’t want to deal with multiple vendors, don’t want them to go out of support at different times. Too many parts to work together.” – LE, Industrial/Manufacturing
  • “We tried that two years ago. That’s a question mark. Complicated, didn’t see enough value in it, big need for it. Storage people are not pushing that hard.” – LE, Transportation
  • “IBM SVC – storage virtualization enables easier data migration.” – LE, Services: Business/Accounting/Engineering
  • “Refreshing storage virtualization – from Invista to VPLEX.” – LE, Telecom/Technology
  • “[We use] IBM SVC. We’re on 6.3. Storage virtualization is a big deal with us, becoming a big VMware shop. Connecting virtualization to VMware is a big deal.” – LE, Public Sector
  • “We plan to consolidate to fewer systems and continue to migrate toward storage virtualization. Putting things behind SVC.” – LE, Telecom/Technology
  • “For block storage, it’s gonna be Hitachi Data Systems. ‘Cause with the virtualization and dynamic pooling I think they are leaders in the field. Everybody else has to follow what they’ve been doing since 2008. EMC and NetApp, several of ‘em have started adopting the pooling technology, but HDS is far superior.” – LE, Telecom/Technology
  • “We want to look at the EMC product – VPLEX.” – LE, Financial Services
  • “Awaiting EMC ViPR.” – LE, Telecom/Technology
  • “HDS VSP [for storage virtualization] in terms of performance, reliability and storage utilization. Storage virtualization to virtualize third-party vendors.” – LE, Financial Services

Microsoft has highest upside potential among discovery and inventory management vendors

October 16th, 2014 by AnnMarie Jordan

Nikolay Yamakawa, Analyst for TheInfoPro

The sprawl of virtual machines has pushed discovery and inventory management software to center stage for many enterprise IT professionals – a trend on which Microsoft is well positioned to capitalize. The software automates tracking and management of assets and changes in an enterprise IT ecosystem that is going through unprecedented transformation. The majority of large and midsize enterprises have now reached sufficient levels of virtualization to create a need to automate the discovery and management of unused VMs, which consume scarce resources, increase overhead and open security holes. The technology ranked number two on our Servers and Virtualization Heat Index, which measures user demand, indicating considerable upside opportunity, with Microsoft the main beneficiary.
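
As a hedged illustration of the housekeeping this category automates, the fragment below flags VMs in a made-up inventory that look idle or unowned; the data structure, field names and thresholds are invented for the example rather than taken from any vendor’s product.

```python
from datetime import datetime, timedelta

# Hypothetical inventory records; a real tool would populate these via discovery.
inventory = [
    {"name": "web-01",  "owner": "ecommerce", "last_activity": datetime(2014, 9, 30)},
    {"name": "test-17", "owner": "unknown",   "last_activity": datetime(2014, 2, 11)},
    {"name": "db-04",   "owner": "finance",   "last_activity": datetime(2014, 10, 1)},
]

IDLE_AFTER = timedelta(days=90)
now = datetime(2014, 10, 16)  # fixed "today" so the example is reproducible

def reclaim_candidates(vms):
    """Return VMs idle beyond the threshold or without a known owner."""
    return [vm for vm in vms
            if now - vm["last_activity"] > IDLE_AFTER or vm["owner"] == "unknown"]

for vm in reclaim_candidates(inventory):
    print(f"Review {vm['name']}: owner={vm['owner']}, "
          f"idle for {(now - vm['last_activity']).days} days")
```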

Microsoft gathered the most citations for future project plans in discovery and inventory management software, with almost half of all selections for near-term projects. Even before Satya Nadella took the helm, his ship had also increased its number of cited in-use cases between the studies, although the gap between Microsoft and its closest runner-up narrowed compared with the prior study. BMC Software dropped to third place, leaving HP alone in second and closer to Microsoft in implementations. About 60% of large and midsize enterprises are using the technology today, and an additional 18% report future project plans, indicating that there is room for change in the vendor landscape as enterprises ramp up their virtualization and automation efforts. Spending intentions are positive for discovery and inventory management software in 2014, with 19% of enterprises planning to increase spending vs. just 4% having budget cuts.

As discovery and inventory management software climbs the ladder of enterprise adoption, we will continue to monitor the vendor landscape in our Wave 14 Servers and Virtualization Study. The study will be published around the end of this year and will show whether Microsoft converts its citations for future project plans into implementations.

Our respondents had the following comments about discovery and inventory management:

  • “Looking into this – we have servers we lose track of.” – LE, Education
  • “Pilot this year is our first foray into this area, so this is brand new spending.” – LE, Telecom/Technology
  • “[Pain point:] Discovery. We need consistent reports.” – LE, Financial Services
  • “VMware doesn’t help if you have physical servers, which is why we have Zabbix as well.” – LE, Materials/Chemicals

Slow shift among vendors in static application security tools

October 14th, 2014 by AnnMarie Jordan

Daniel Kennedy, Research Director for Information Security

In the previous Information Security Study, 53% of security managers said they had controls built in at different stages of the software development lifecycle, and for many of them a key control is analysis of in-house-produced source code or compiled binaries for potential security weaknesses (aka static application security tools, or SAST). Fortify and Ounce Labs were early competitors in this space that gained some enterprise penetration before their acquisitions by HP and IBM, respectively, but as the 2014 Information Security Study revealed, newer players have made significant gains in an increasingly competitive enterprise market.
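
For readers unfamiliar with the category, the toy check below gives a flavor of what static analysis does: scan source text for a risky pattern (here, hardcoded credentials) without ever executing the code. It is a deliberately minimal sketch, nowhere near the depth of a commercial SAST engine.

```python
import re

# A single illustrative rule; real SAST engines apply thousands, plus data-flow analysis.
HARDCODED_SECRET = re.compile(r"""(password|api_key|secret)\s*=\s*['"][^'"]+['"]""", re.I)

def scan_source(source: str):
    """Return (line number, line text) for lines matching the rule."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if HARDCODED_SECRET.search(line):
            findings.append((lineno, line.strip()))
    return findings

sample = '''
db_host = "db.internal"
password = "hunter2"          # weakness: credential committed to source
timeout = 30
'''

for lineno, line in scan_source(sample):
    print(f"line {lineno}: possible hardcoded credential -> {line}")
```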

In 2011, fresh off the aforementioned acquisitions, HP (followed by IBM) was the most significant option for code-scanning tools, at that point present in a quarter of enterprises. Fast-forward to 2014: HP retains its top spot, but a vendor that showed up in just 1% of 2011 responses now sits at number two with 6%: Veracode. By offering binary analysis ‘on demand’ rather than the software installations required by the larger IT providers’ offerings, Veracode has risen slowly against the gradual erosion of HP’s and IBM’s market share. IBM dropped to third in 2014, followed closely by WhiteHat Security and Checkmarx.

Quotes from security managers in the latest Information Security Study about vulnerability scanning of their source code or binaries included the following:

  • “Rough, because we have a lot of code languages. Gotta find a vendor that can keep up with our changes. We were close to Veracode, but we made changes and they couldn’t handle that.” – LE, Telecom/Technology
  • “It’s not seen as something that will yield benefit. No ROI seen here.” – LE, Consumer Goods/Retail
  • “We use third parties, but are considering WhiteHat for both code analysis and Web app scanning.” – LE, Services: Business/Accounting/Engineering
  • “I need it, I want to have it, make sure it does what I want it to do. I have budget for it, and I have yet to figure out who actually does what I’m asking. My definition of that may be different than what vendors are doing.” – LE, Industrial/Manufacturing

ServiceNow snatches sole lead in enterprise CMDB technology, but uncertainty escalates

October 10th, 2014 by AnnMarie Jordan

Nikolay Yamakawa, Analyst for TheInfoPro

ServiceNow moved to the head of the table in enterprise configuration management database (CMDB) technology in 1H 2014, but not all is bright and shiny. BMC filed a patent-infringement lawsuit against ServiceNow this week, adding to the one filed by HP at the beginning of the year. Still, ServiceNow has more than doubled its large- and midsize-enterprise selections for CMDB implementations between our studies.

The vendor has moved from about 5% of selections in 1H 2013 to 12% in 1H 2014, leaping ahead of HP, SolarWinds, Cisco, BMC Software and homegrown alternatives for in-use cases. The spike in adoption should not be surprising – our previous Networking Thursday TIP on CMDB indicated that ServiceNow gathered the most selections for future projects while some of the other vendors fell behind. However, the outstanding lawsuits add uncertainty about the vendor’s ability to maintain its current growth trajectory.

CMDB can be defined as a repository of information about networked devices in enterprise architecture and their interconnected relationships – a technology with the highest future-adoption potential in the network management category. The recently completed Networking Wave 11 study shows that about 13% of enterprises have in-pilot/evaluation, near-term, long-term and past long-term project plans, on top of 59% that are already using the technology.
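
A minimal sketch of that definition as a data structure – purely illustrative, not any vendor’s schema – might look like the following: configuration items plus typed relationships between them, which is enough to walk dependencies for impact analysis.

```python
from dataclasses import dataclass, field

@dataclass
class ConfigurationItem:
    """One tracked device or service in the CMDB (illustrative fields only)."""
    ci_id: str
    ci_type: str            # e.g. "switch", "server", "application"
    attributes: dict = field(default_factory=dict)

@dataclass
class Relationship:
    source: str             # ci_id of the dependent item
    target: str             # ci_id of the item it depends on
    kind: str               # e.g. "runs_on", "connects_to"

cmdb_items = {
    "sw-edge-01": ConfigurationItem("sw-edge-01", "switch", {"model": "placeholder"}),
    "srv-erp-01": ConfigurationItem("srv-erp-01", "server", {"os": "placeholder"}),
    "app-erp":    ConfigurationItem("app-erp", "application"),
}
cmdb_relations = [
    Relationship("app-erp", "srv-erp-01", "runs_on"),
    Relationship("srv-erp-01", "sw-edge-01", "connects_to"),
]

def impacted_by(ci_id, relations):
    """Items that depend (directly or indirectly) on ci_id; assumes acyclic relationships."""
    direct = [r.source for r in relations if r.target == ci_id]
    return direct + [hit for d in direct for hit in impacted_by(d, relations)]

print(impacted_by("sw-edge-01", cmdb_relations))  # ['srv-erp-01', 'app-erp']
```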

Spending intentions remain positive for CMDB, with 19% planning to increase budget allocations in 2014 and 18% in 2015, as opposed to just 1% with spending cuts this year and none for next year. Although CMDB displayed positive upside potential last year as well, the technology has not experienced a significant uptick in new implementations between the studies. The narratives from decision-makers suggest that ServiceNow’s rise in year-over-year implementations can be partially attributed to enterprises consolidating from multiple other tools onto a single one.

A growth trajectory in future enterprise adoption could now be harder to maintain for ServiceNow, since the two lawsuits may not go unnoticed by potential customers. Furthermore, the selections for future project plans are spread more equally among runners-up in 1H 2014 than they were back in 1H 2013.

Network managers had the following comments about ServiceNow CMDB at their enterprises:

  • “Working on that now. Just because we had several different tools doing the job that this one vendor [ServiceNow] could provide everything all together. Just an enhancement of the service that we can provide.” – LE, Materials/Chemicals
  • “Service Now replaces four nonintegrated systems for 2015 total rollout.” – LE, Consumer Goods/Retail
  • “In the cloud for ServiceNow.” – LE, Transportation

For public cloud vendors, the road to enterprise growth is a steep and rugged path

October 8th, 2014 by AnnMarie Jordan

Peter ffoulkes, Research Director for Servers and Cloud Computing

While cloud in some form or other is generally accepted to be the way of the future for delivering the majority of IT services, the transition from traditional IT architectures to a cloud-based future is a complicated one that will play out over multiple years. We asked respondents how their workloads were distributed across the different execution venues in 2014, and how they expected that distribution to look two years from now. In 2014, just 15% of workloads were deployed in private cloud environments of some variety, the overwhelming majority of them internal, on-premises private clouds. By 2016 that figure is expected to rise to 30% of workloads, with 23% being internal, on-premises. By contrast, just 9% of workloads were deployed in public clouds (both software and infrastructure services) in 2014, a share expected to rise to 18% by 2016.

The public cloud provider challenge

There is no doubt that the leading Web-scale infrastructure cloud providers have built extremely efficient datacenter architectures and are capable of delivering services at very competitive prices per VM from a technology perspective. However, to ensure business efficiency, it will also be important for them to balance the customer mix between small and large contracts to keep overhead costs under control, as well as to increase the percentage of enterprise workloads hosted. To gauge the extent of public cloud infrastructure deployments, we asked Wave 7 Cloud Study respondents how many virtual nodes (VM instances or equivalent) were purchased or maintained per month, averaged over a calendar year.

In line with the limited percentage of enterprise workloads currently deployed on public infrastructure services, nearly half (47%) deployed 250 or fewer VMs per month, with half of those (24%) in the 1-50 VM range.

To achieve their growth and large account penetration goals, Web-scale cloud providers have a steep and challenging climb ahead, with just 8% of enterprise workloads projected to be in public cloud infrastructure services two years from now and 56% of respondents believing that they can deliver cloud services at the same or lower cost internally than by using external providers.
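
The 24/7 cost argument that surfaces in the anecdotes below comes down to simple arithmetic. This sketch, with entirely hypothetical prices, shows why an always-on workload can favor internal hosting while a bursty one favors hourly public cloud pricing.

```python
# Hypothetical figures for illustration only; real comparisons depend on instance
# size, discounts, labor, facilities and utilization.
CLOUD_RATE_PER_HOUR = 0.28          # on-demand price for one VM
INTERNAL_COST_PER_VM_MONTH = 95.0   # amortized hardware, licenses, power, staff

def monthly_cloud_cost(hours_used_per_month, rate=CLOUD_RATE_PER_HOUR):
    return hours_used_per_month * rate

always_on = 24 * 30                  # ~720 hours/month, runs around the clock
bursty = 8 * 21                      # business hours only, ~168 hours/month

print(f"Always-on VM: cloud ${monthly_cloud_cost(always_on):.0f}/mo "
      f"vs internal ${INTERNAL_COST_PER_VM_MONTH:.0f}/mo")
print(f"Bursty VM:    cloud ${monthly_cloud_cost(bursty):.0f}/mo "
      f"vs internal ${INTERNAL_COST_PER_VM_MONTH:.0f}/mo")
```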

Anecdotal commentary illustrates the range of challenges that public cloud infrastructure providers must address to increase their enterprise penetration:

  • “It’s neck and neck right now. When we do cost comparisons, it’s cheaper for us to do so internally if it’s a system that runs 24/7.” – LE, Telecom/Technology
  • “Significantly less expensive. It’s the financial model that the cloud services used, that charge by the hour. So if we keep a resource running 24/7 at Amazon compared to running internally, the cost is three times more.” – LE, Services: Business/Accounting/Engineering
  • “Much less expensive internally right now. But it’s more difficult to do the accounting as you move to external services because of the variety of licensing models.” – MSE, Consumer Goods/Retail
  • “AWS is there when you need it. The ramp-up is fabulous, and they are the leader in this space for the moment. Downside is their one-size-fits-all approach. It is impossible to negotiate any terms and contracts. This has limited our interaction with them.” – LE, Financial Services
  • “The product Google is selling is not really enterprise-ready, I don’t see a roadmap. They’re re-packaging the consumer service, and there are all kinds of things that don’t make sense from an enterprise point of view. Mostly being able to secure your own data. Value is not as good as we thought it would be. Strengths: Ease of access, that’s the big one, you can get it from anywhere. Weaknesses: Lack of enterprise functionality that would allow you to have better control over what your users are doing and the security around your data, that sort of thing.” – MSE, Financial Services
  • “Strengths: I think Microsoft’s cloud may be quite mature. And also have a lot of features. Weaknesses: Pricing mechanisms. And the technical support. Pricing might be too high. [Support?] Maybe sometimes they tend not to fix my problem or answer my question. Sometimes they don’t know either.” – LE, Telecom/Technology

Software-defined storage definitions evolve in the enterprise community

October 6th, 2014 by AnnMarie Jordan

Nikolay Yamakawa, Analyst for TheInfoPro

Software-defined storage (SDS) has a variety of definitions in the enterprise storage community, but the term evolved considerably between the first and second halves of 2014. In the first half of the year, storage professionals had a hard time making sense of what SDS means, considering it more of a marketing term than anything else. The definitions offered in the second half were more specific, as our respondent community began forming its own view of what the term means.

At the end of the Wave 18 Storage Study, more than 66% of storage professionals provided some form of SDS definition, but roughly a third were still not sure about the meaning. Storage virtualization came out as the most common description of SDS, with 19% of selections, followed by something that is managed from a virtual layer with 18% and hardware abstraction with 14%. About 12% of respondents view SDS as a buzzword, and a further 16% have not defined the term or believe that it does not have any meaning, with 8% each. Some of the other less-common definitions included commodity hardware, automated provisioning and open source.

Last week, we started the interviews for our next Storage Study. If you’d like your voice to be heard again or know someone who would be interested in voicing their opinion on SDS and many other technologies, it is a good time to sign up. As always, we will share the results of the study with you at the end.

The change in SDS definitions between the first and second quarters of 2014 shows the fast pace of change happening in the market today:

  • “[Q1 2014:] I think it is ‘marketing hype’ more than anything. It is really from a virtualization buzz.” – LE, Education
  • “[Q1 2014:] Some marketing guy decided to create some real marketing hooey!” – LE, Healthcare/Pharmaceuticals
  • “[Q1 2014:] It’s more of an open source, not a big vendor. More of an open source build your own.” – LE, Public Sector
  • “[Q2 2014:] Storage virtualization – pooling and abstracting of physical storage into more logical units.” – LE, Telecom/Technology
  • “[Q2 2014:] Ability to use rest-based API calls with federated ID management to configure, deploy and monitor storage in an automated workload.” – LE, Transportation
  • “[Q2 2014:] Utility-based storage available on demand.” – LE, Financial Services

The Intel ‘industry standard’ server evolves into the ‘software-defined infrastructure’

October 3rd, 2014 by AnnMarie Jordan

Peter ffoulkes, Research Director for Servers and Cloud Computing

A long time ago, in a datacenter far, far away, there were no x86-based servers. Incredible as that may seem – and so last century – that state was not uncommon until the mid-1990s. With the advent of capable and scalable x86 processors such as the Pentium Pro, Pentium II Xeon and their successors, the world began to change. Two decades later, the x86 architecture, now more generally promoted as IA (Intel Architecture), dominates general-purpose server deployments, whether in traditional enterprise datacenters or on hyperscale cloud platforms such as AWS, Google and Microsoft. Along the way it has created an ecosystem of top-tier platform suppliers, including HP, Cisco, Dell and IBM (soon to be Lenovo), and a second tier of notable suppliers, including Supermicro and others.

IA – industry-standard architecture

So, having won the marketing and market-share war, why would Intel want to change the rules? True to former CEO (and employee number three) Andy Grove’s philosophy – ‘Only the Paranoid Survive’ – the company understands that the world is changing dramatically and that what made it successful over the last two to four decades may not be the formula for future success.

Under the leadership of Diane Bryant, Senior Vice President and General Manager of the Data Center Group, the company’s server technology announcements have been transformed. The tech specs are still there if you need them, but the delivered business value now leads front and center. This is not about IA just for computation; it spans the full spectrum of datacenter architecture – compute, network, storage and software – a design center for a truly industry-standard datacenter architecture: proprietary, maybe, but demonstrably effective.

At the announcement event for the latest generation of 22nm IA processors, the new Intel Xeon E5-2600 v3 (OK – the iPhone 6 is easier to digest, although I still don’t have a clue why I need one), a key focus was real-world performance improvements on various aspects of a life sciences genetics (read: big-data analysis) workload presented by Ari Berman of BioTeam. Comparing the genetics analysis with a comparable four-year-old Intel-based system – a real-world comparison that I personally find credible – there was a 70x improvement on I/O, 20x on storage (SSD vs. spinning disk), 2.7x on CPU and 12x on the total workflow. An order of magnitude on delivered results is meaningful. This is not just “mad scientist HPC”; it lies at the heart and soul of business analytics, the future of an information-based culture and economy. Ours!
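
As a back-of-the-envelope check on how component-level gains roll up into a total-workflow figure, the sketch below applies the reported speedups to a hypothetical time breakdown; the time fractions are invented, so the point is the method rather than the specific multiple it prints.

```python
# Hypothetical breakdown of where a genomics workflow spends its time (fractions sum to 1).
baseline_fractions = {"io": 0.50, "storage": 0.33, "cpu": 0.15, "other": 0.02}

# Component speedups as reported in the presentation; "other" assumed unimproved.
speedups = {"io": 70.0, "storage": 20.0, "cpu": 2.7, "other": 1.0}

def overall_speedup(fractions, gains):
    """Amdahl-style combination: new total time is each fraction divided by its gain."""
    new_time = sum(frac / gains[part] for part, frac in fractions.items())
    return 1.0 / new_time

print(f"Estimated total workflow speedup: {overall_speedup(baseline_fractions, speedups):.1f}x")
```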

So this was a well-staged announcement (kudos to the marketing team), but it seems worthy of attention to me. With the focus shifting to IA as a comprehensive, multi-technology, multi-discipline datacenter silicon architecture, it looks as though Intel is on to something in terms of future datacenter architectures. So far, the alternative future datacenter infrastructure visions that I have seen are lacking in Power, and there’s no ARM in them yet. Of course, it is now up to Intel to deliver convincingly on the vision…

Anecdotal commentary by TheInfoPro’s respondent community sheds light upon the attitudes held toward changing technology trends:

  • “Open Compute, not current generation, but where open rack and some of the technology Intel is working on changing how computing infrastructure goes. Moving everything to high-speed interconnects to have a tray of CPUs and connects and memory.” – LE, Financial Services
  • “Cisco really offers some great ease of management as well as terrific brand recognition. In the server market, everyone re-brands Intel. I just want a server that is configured the way I want it. It all boils down to relationship, quality and deliver ability. Cisco came through with great bundling. This will cut our switch and support costs.” – LE, Industrial/Manufacturing
  • “Dell has good brand and recognition. They stay inline with the Intel CPU roadmap. They maintain a business focus even though they sell retail computers.” – LE, Financial Services
  • “It’s not their vision, but Intel’s vision that catches our eye. Reliability and TCO is pretty solid compared to competitors.” – MSE, Energy/Utilities
  • “On the server side, IBM still has name recognition, but it has become more perception. I think that at some point in the future, IBM will be out of the x86 market. It really is an Intel product that IBM slaps their name on.” – LE, Telecom/Technology
  • “Adoption of the Intel virtual environment.” – LE, Financial Services

Multi-year shift in DAST approaches an apex

October 1st, 2014 by AnnMarie Jordan

Daniel Kennedy, Research Director for Information Security

A perceptible multi-year shift among vendors offering Web application security testing solutions – sometimes referred to as dynamic application security testing, or DAST – has played out across our quantitative information security studies. The shift suggests that, in the absence of an acquisition, smaller pure-play WhiteHat Security is on a trajectory to overtake much larger competitors in enterprise usage. As recently as the 2013 study, IBM and HP led the list of DAST vendors via their prior acquisitions of Watchfire and SPI Dynamics, respectively, part of an application security strategy that also included code security analysis acquisitions.
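
For contrast with the static tools discussed in the previous post, a dynamic scanner exercises the running application from the outside. The toy probe below, which merely checks whether a site returns a few common security response headers, hints at that model; it assumes the Python requests library and a placeholder URL, and is nowhere near what a commercial DAST service does.

```python
import requests

# Headers whose absence a dynamic scan commonly flags; list trimmed for illustration.
EXPECTED_HEADERS = ["X-Frame-Options", "Content-Security-Policy", "Strict-Transport-Security"]

def probe(url):
    """Fetch a page as an outside client and report missing security headers."""
    response = requests.get(url, timeout=10)
    missing = [h for h in EXPECTED_HEADERS if h not in response.headers]
    return response.status_code, missing

if __name__ == "__main__":
    status, missing = probe("https://example.com")   # placeholder target
    print(f"HTTP {status}; missing headers: {missing or 'none'}")
```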

In 2011 IBM captured nearly 6% of responses for being ‘in use’ among interviewees’ enterprises, followed by HP at 4%. Cenzic (more recently acquired by Trustwave) followed at a more distant third, with around 1%. Fast-forward to 2014, and Qualys is now the most-cited vendor in the space as traditional vulnerability assessment providers further invade the application security space. HP sits at 6%, as does IBM. WhiteHat Security, which first showed up in the study in 2012, is at 5% ‘in use,’ with a chance to grow 2 percentage points over the next year and a half based on the reported plans of information security managers.

Quotes from security managers using WhiteHat Security from the latest Information Security Study included the following:

  • “WhiteHat has the ability to execute and the quality of the service they provide. Weakness is market penetration and source code analysis and being late to the game. I would like to see them more strategic into the overall security environment.” – LE, Financial Services
  • “Their [WhiteHat Security’s] product implementation had a few hiccups; we’re still struggling to implement it.” – LE, Consumer Goods/Retail
  • “It [WhiteHat Security] works as advertised, does exactly what they say it will do. Tech support is weak; there is lack of availability. Getting the human can be a real challenge.” – MSE, Financial Services
  • “We’re using WhiteHat for a few apps and will expand usage. They’re the leader in their space.” – LE, Other