Solution Provider Panel: Hyper-Converged Infrastructure Doesn’t Mean Throwing Out Existing IT Investments
by Joseph F. Kovar on December 6, 2016
Hyper-converged infrastructure has become a big opportunity for solution providers looking to help customers find an easy-to-deploy alternative to legacy IT infrastructures for an ever-widening range of applications.
That’s the message from a panel of solution providers who Monday told an audience of their peers at this week’s NexGen Cloud conference, hosted by CRN parent The Channel Company, that while customers are increasingly implementing hyper-converged infrastructure, the technology is no panacea to all of their IT ills…
Partners Cheer HPE Sales And Marketing Restructuring; Two Top Execs To Step Aside
by Steven Burke on June 27, 2016
Hewlett Packard Enterprise partners Monday applauded a major sales and marketing restructuring aimed at simplifying the organization and making the company more nimble and easier to partner with.
In the wake of the restructuring, CTO Martin Fink and Chief Customer Officer John Hinshaw are set to step aside at the end of the year.
“To drive better sales execution while optimally serving our global customers, we are aligning our sales teams into a single global sales organization within our Enterprise Group,” said HPE CEO Meg Whitman in a blog post disclosing the changes.
Peter Ryan, currently senior vice president and managing director of the Enterprise Group’s EMEA Region, will oversee the new global sales organization, including indirect and regional sales.
HPE also centralized product marketing, e-commerce and customer advocacy into a single marketing organization under Chief Marketing and Communications Officer Henry Gomez. “By bringing these organizations together under [Gomez], we will simplify our processes, streamline our focus and create better career opportunities for our employees,” said Whitman in the blog post.
Fink, a 30-year HP veteran, will be retiring at the end of the year. Hinshaw, who joined HPE five years ago as executive vice president, technology and operations, also plans to leave his position at the end of the year, Whitman said in the blog post.
Finally, COO Chris Hsu will oversee the IT and cybersecurity teams as part of a move to “drive process improvement, enhance customer and partner experience and employee engagement,” said Whitman.
Whitman described the changes, which come on the heels of Palo Alto, Calif.-based HPE’s move to spin off and merge its $20 billion enterprise services business with systems integrator CSC, as the next step in creating a “stronger, more focused HPE.
“To that end we are taking steps to centralize and simplify our organization,” said Whitman. “These changes will make us easier to buy from and partner with.”
Partners said they see the changes as the next chapter in Whitman’s all-out effort to remake HPE as a more agile and nimble software-defined infrastructure leader, while larger competitors are in the midst of turmoil.
Dell, for example, is in the process of acquiring EMC-VMware in the largest IT acquisition in history, while Cisco Systems has had a number of high-profile executive departures since CEO Chuck Robbins took the helm last July and is undergoing a massive transformation into a software company.
“This definitely gives HPE a competitive advantage,” said Dan Molina, CTO of San Diego-based Nth Generation Computing, one of HPE’s top enterprise partners. “This is another huge proof point that HPE is streamlining and cutting layers of management, therefore making people more directly accountable. To me, that is a requirement for success in the future.”
Molina and other partners said they also see the restructuring as a stepped-up focus to continue to deliver sales growth – putting the entire global sales organization under Antonio Neri, executive vice president and general manager of the Enterprise Group.
In the most recent quarter, HPE’s Enterprise Group delivered 7 percent sales growth under Neri’s leadership. That helped drive HPE’s first year-over-year sales growth in five years.
“I like what I hear about consolidating the different sales groups under [the Enterprise Group],” said Molina. “In the past, one of our biggest concerns was the different groups operating in silos with different product specialists like software and storage. I think it makes a lot of sense to combine all of sales under the [Enterprise] Group. It is going to streamline operations and make it easier for partners and customers to deal with HPE. [Whitman] continues to streamline and provide more complete solutions leveraging the vast HPE portfolio.”
As part of the restructuring, HP Labs, including the highly touted bid to create a new breakthrough in computing called “The Machine,” will be placed under Neri as part of the Enterprise Group, a move that will “further accelerate the time it takes to drive technology from research and development to commercialization,” said Whitman.
Whitman said The Machine prototype, which is set to be delivered by the end of the year, remains on track. “This shows HPE is serious about getting a prototype of The Machine out by the end of the year,” said Molina, noting he was a huge fan of Fink. “This tells me they are starting to move something from R&D into a final product.”
Scott Douglas, senior vice president of CB Technologies Inc., Orange, Calif., one of HPE’s top enterprise partners, said he sees all of the changes accelerating sales for HPE partners.
“This is great for the channel,” he said. “This makes HPE more agile and nimble. From a channel perspective, they are putting their money where their mouth is. That’s a good thing for partners.”
The CEO of a top HPE partner, who did not want to be identified, said the sales restructuring makes the channel more valuable to customers that have difficulty navigating HPE. “When you have a restructuring like this it gets more customers to engage with solution providers,” he said. “Change is good, and we need to embrace it.”
Whitman, for her part, said, “We’re living in a world where continuous improvement is essential to long-term success. I’m excited about the future of HPE, and I’m confident that these changes will help us accelerate our strategy and continue to win in the marketplace.”
Cavium To Acquire QLogic, Create IP, Storage Networking Heavyweight
by Joseph F. Kovar on June 15, 2016
Cavium, a developer of networking semiconductors and solutions, is planning to acquire QLogic, a top provider of high-speed storage networking solutions.
San Jose, Calif.-based Cavium on Wednesday unveiled a definitive agreement to acquire Aliso Viejo, Calif.-based QLogic for $1.36 billion. That purchase price includes QLogic’s cash on hand of $355 million, giving the deal a total enterprise value of just over $1 billion.
The proposed purchase price included a premium of more than 14 percent over QLogic’s total market capitalization based on that company’s share prices at the end of the trading day Wednesday.
[Related: EMC World: Tucci Passes Torch To Dell]
Wall Street reaction to news of the proposed acquisition, which was revealed after the close of trade Wednesday, was strong. In after-hours trading about three and a half hours after the close of the market, Cavium’s share price fell over 8 percent, while QLogic’s share price rose nearly 13 percent.
Cavium expects QLogic’s intelligent server and storage connectivity solutions to complement Cavium’s networking, compute and security solutions and enable it to provide complete end-to-end offerings to enterprise, cloud, data center, storage, telco and networking customers and OEMs.
Cavium offers a series of networking and storage processors that complement QLogic’s Ethernet and converged networking interface cards as well as its Ethernet-based and Fibre Channel-based adapters and controllers. According to Cavium, there is only a 10 percent revenue overlap between the two companies.
For Cavium, the acquisition also adds to its OEM and business customer base.
QLogic has of late been a strong force in the channel, said Dan Molina, chief technology officer at Nth Generation Computing, a San Diego-based solution provider and QLogic channel partner.
“QLogic has in the last few years been working more with us as other competitors got quieter,” Molina told CRN. “This is especially true after QLogic exited the storage switch market and partnered more closely with Brocade.”
QLogic and Brocade have been working very closely with Nth’s main storage vendor, Hewlett Packard Enterprise, Molina said. “We’ve been at a lot of HPE events where QLogic has been featured, and where it has been reminding everyone of the importance of the storage ‘plumbing,’ ” he said. “It’s a good message along with Brocade.”
Molina said he is not surprised that QLogic was able to get a good premium on its market capitalization given the number of customers it works with.
He also said he expects Cavium to continue to partner with Brocade. “Cavium seems to have a strong focus on Ethernet,” he said. “So it might still be a partner with Brocade on the Fibre Channel side.”
The acquisition of QLogic will help make Cavium a diversified pure-play infrastructure semiconductor leader, said Cavium President and CEO Syed Ali in a statement.
“QLogic’s industry leading products extend our market position in data center, cloud and storage markets, and further diversifies our revenue and customer base. In addition to the compelling strategic benefits, the manufacturing, sales and operating synergies will create significant value for our shareholders,” Ali said in the statement.
Christine King, executive chairman of QLogic, said in a statement that the combined $1 billion revenue of Cavium and QLogic will benefit their customers.
“The scale of operations of a nearly $1 billion revenue business will allow the combined company to deliver better solutions for customers and create more career opportunities for employees,” King said in the statement.
Spokespeople for Cavium and QLogic were unable to reply to requests for more information by publication time. However, Cavium said replays of a Wednesday conference call focused on the planned acquisition will be available on Cavium’s website.
The boards of directors of both Cavium and QLogic have already approved the acquisition, which is expected to close some time in the third quarter of this year.
HPE Adds Broadwell Processors, Persistent Memory To Gen9 ProLiant Servers
by Joseph F. Kovar on March 31, 2016
HPE Thursday updated its ProLiant Gen9 server portfolio with the introduction of Intel’s newest Broadwell processor as well as its new persistent memory technology, which allows the server’s memory to serve as a high-performance storage tier.
The latest versions of HPE’s Gen9, or ninth generation, ProLiant DL360 and DL380 servers also include new management, security and storage capabilities aimed at helping customers tie on-premises data center infrastructures to the cloud for running mission-critical applications, said Tom Lattin, HPE’s vice president of server options.
“This is an introduction for a new set of capabilities for our ProLiant Gen 9 servers,” Lattin told CRN. “We’re bringing in a new memory architecture, a new architecture for migrating solutions to the cloud, higher performance and higher security, in addition to our new persistent memory technology.”
The new servers come as HPE is really making its presence felt in the server market, said Mike Carter, president and founder of eGroup, a Mt. Pleasant, S.C.-based solution provider and HPE channel partner.
“In general, we’ve seen a heightened and pronounced presence from HPE in the last six months,” Carter said. “HPE had almost been invisible for a while as Cisco UCS and VCE took the center stage.”
The updated ProLiant DL360 and ProLiant DL380 servers are based on Intel’s new Xeon E5-2600 v4 processors, which were formally introduced Thursday by Intel at the Intel Solutions Summit, held this week in Orlando, Fla.
The new Xeon E5-2600 v4 processors, code-named Broadwell, give the new Gen9 ProLiants a significant boost in performance, Lattin said.
The servers also come with the first implementation of HPE’s persistent memory technology.
Introduced Tuesday, persistent memory brings together standard DRAM along with NAND flash memory and a micro controller with an integrated battery on a module that fits in a standard memory slot, said Bret Gibbs, persistent memory product manager at HPE, Palo Alto, Calif.
“We’re looking to deliver the performance levels you see with DRAM, but in the realm of meeting storage requirements,” Gibbs told CRN.
Its first implementation, the NVDIMM, which is short for “non-volatile DIMM,” pairs 8 GB of DRAM for pure speed with 8 GB of NAND flash for persistence. Future versions will be available in different capacity points.
Because of the on-board DRAM, NVDIMM performance is the same as DRAM, HPE’s Gibbs said. However, when compared to SAS SSDs and PCIe-based storage, NVDIMM offers 24 times the IOPS and six times the bandwidth, with 73 times lower latency, he said.
NVDIMM is tied closely to the software used in servers, Gibbs said. “The operating system will see NVDIMM as block storage, as if it’s hard disk or SSD capacity,” he said. “To get full performance, applications will need to be modified to address NVDIMM. Unmodified applications will see increased performance, but there will be a huge difference in applications that are modified.”
Microsoft plans to show how NVDIMM technology works with its applications, Gibbs said. HPE is also offering a software development kit for Linux developers to get their applications ready to work with NVDIMM, he said.
Initial target workloads for NVDIMM will be database applications, Gibbs said.
NVDIMM, which is based on industry standards, already has other implementations in the market, Gibbs said. “But this is the first to be designed for a specific server,” he said. “Our NVDIMM is tied to HPE’s Smart Storage Battery, which acts as the battery backup to the DIMMs.”
HPE 8-GB NVDIMM modules will be list-priced at $899. This compares with about $249 for a standard RDIMM module, HPE said.
The ProLiant DL360 and DL380 have proven to be real enterprise workhorses, and the new persistent memory will make them even more so, eGroup’s Carter said.
“The volatility of RAM has been an issue,” he said. “Persistent memory seems to be the right ticket for improving performance.”
NVDIMM will be key to running important data in memory and knowing that the data can’t be lost, said Dan Molina, chief technology officer at Nth Generation Computing, a San Diego-based solution provider and longtime HPE partner.
Customers with mission-critical applications such as Oracle and Microsoft SQL databases would like to use persistent memory to run those applications, Nth Generation’s Molina told CRN.
“If they are run in memory that acts like storage, they will run dramatically faster,” he said. “Customers could also fit part of an application like transactional logs in persistent memory, which would still provide important performance benefits.”
In addition to the Intel Xeon E5-2600 v4 processors and HPE’s persistent memory technology, the updated Gen9 ProLiant servers have a number of other memory advances, HPE’s Lattin said.
They now offer the option of DDR4 memory running at 2400 MT/s, up from the previous top speed of 2133 MT/s, he said.
Customers can now also use memory modules with up to 128-GB capacity per module compared with the previous maximum of 64 GB per module. “Customers can now run an entire workload in memory,” he said. “Customers have already started doing so. But for applications which were capacity-bound, we eliminated the boundary by doubling the capacity of the DIMMs.”
HPE is also moving to Trusted Platform Module 2.0 with the updated ProLiant servers, Lattin said. This is the latest version of the TPM specification for a dedicated microprocessor that integrates cryptographic keys into hardware to decrease the risk of cyberattacks.
HPE Intros ‘Persistent Memory,’ Combining DRAM Speed With NAND Flash Persistence
by Joseph F. Kovar on March 29, 2016 8:48 pm EDT
Hewlett Packard Enterprise on Tuesday officially rolled out a new type of server memory it said combines the performance of DRAM with the persistence of traditional SSDs or spinning disk.
The new technology, dubbed persistent memory, is scheduled to be available starting in early April, initially as an option in new versions of HPE’s ProLiant Gen9 DL360 and DL380 servers — possibly featuring new Intel Broadwell processors — which the company is slated to introduce this Thursday.
The unveiling of persistent memory came via a meeting between HPE and a small group of journalists and analysts, including CRN.
With persistent memory, HPE is combining standard DRAM along with NAND flash memory and a micro controller with an integrated battery on a module that fits in a standard memory slot, said Tim Peters, HPE’s vice president and general manager for ProLiant rack servers, server software and core enterprise solutions.
In its first implementation, the NVDIMM, which is short for “non-volatile DIMM,” will pair 8 GB of DRAM for pure speed with 8 GB of NAND flash for persistence, Peters said. Future versions will be available in different capacity points, he said.
HPE 8-GB NVDIMM modules will be list priced at $899. This compares with about $249 for a standard RDIMM module, he said.
The NVDIMM modules will first be available with the updated Gen9, or ninth-generation, ProLiant DL360 and DL380 servers slated to be unveiled Thursday, Peters said. The servers support up to 16 NVDIMMs per server.
HPE did not go into much detail about the updated ProLiant DL360 and ProLiant DL380 servers. But one of the presenters let slip that the new servers might include the new Intel Broadwell Xeon E5-2600 v4 processors. Peters, however, qualified the remark as assuming such a processor exists, after an HPE spokesperson noted that Intel does not like partners talking publicly about unreleased products.
Intel did not respond to a request for more information about the timing of the release of its new Broadwell processors by publication time.
NVDIMM is going to be an amazing new option for business-critical infrastructures, said Dan Molina, chief technology officer at Nth Generation Computing, a San Diego-based solution provider and longtime HPE partner.
Solution Providers Pumped as SD-WAN Market Set to Soar
by Mark Haranas on March 25, 2016 11:23 am EDT
Although software-defined networking is capturing the headlines this year, software-defined wide area networking is set for a massive compound annual growth rate of 90 percent over the next four years, according to a new report from research firm IDC.
By 2020, the SD-WAN market will be a $6 billion industry, up significantly from $225 million in 2015. Networking solution providers are poised to pounce on this opportunity to deliver SD-WAN and the professional services tied to the technology.
“Are we seeing an increase in SD-WAN sales? Absolutely yes,” said Dan Molina, CTO of Nth Generation Computing, a San Diego-based solution provider that partners with SD-WAN vendor Silver Peak. “More customers are seeking it out and more customers are being more open and receptive when we bring this modern SD-WAN option to them.”
Molina said customers are adopting SD-WAN at a blistering pace due in part to the cost savings that come from no longer having to depend on “expensive” MPLS circuits. SD-WAN leverages cost-effective broadband connections while adding a layer of intelligence to keep the network secured and optimized for “private-line like” performance, he said.
IDC forecasts that SD-WAN revenue will start to ramp up strongly in 2016 and 2017 across a broad range of verticals due to the rise of cloud computing and the need for simplified virtual private network capabilities and lower MPLS costs.
In a recent survey of U.S. enterprises, nearly half said they’re planning to consider migrating to SD-WAN over the next two years, according to IDC.
“We see a bright future for SD-WAN,” said Molina.
Compared with those running traditional router-based WANs or hybrid WAN architectures, customers who adopt SD-WAN typically have multiple branch offices using Software-as-a-Service applications and unified communications and collaboration services.
IDC’s Rohit Mehra, vice president of Network Infrastructure, said SD-WANs leverage hybrid WANs, but also incorporate a centralized application-based policy controller, analytics for application and network visibility as well as a software overlay that abstracts underlying networks. He said it can be optimized to meet the requirements of cloud applications and services.
“As public and private cloud use continues to grow, WAN performance becomes critical,” said Mehra in an email to CRN. “As enterprises move business processes to the cloud, there is a greater need to fully integrate cloud-sourced services into WAN environments to ensure workload/application performance, availability and security.”
The vendor landscape is also rapidly evolving to meet the increasing demand. The space is becoming crowded with companies such as Silver Peak, Viptela, CloudGenix, Nuage Networks, Glue Networks, Talari Networks and VeloCloud all tussling to take market share.
SD-WAN startup VeloCloud recently raised $27 million in a funding round, bringing the total raised to nearly $50 million. The most recent round included funding from market competitor Cisco Systems, which also provides SD-WAN solutions. Application performance vendor Riverbed Technology recently acquired Germany-based Ocedo, which specializes in products designed for SD-WAN.
Vendors in this space are also starting to form strategic partnerships to enhance SD-WAN solutions. This week, Viptela unveiled a partnership with infrastructure management software vendor SevOne to provide unified network monitoring for SD-WANs by integrating platforms.
CISA Could Lead to Privacy Issues and Abuse, Security Channel Fears
by Joseph F. Kovar on October 28, 2015, 9:14 pm EDT
A new Senate bill that gives businesses that suffer cybersecurity breaches immunity from provisions barring the sharing of information is causing great concern among the IT security channel because of the potential for abuse.
The Cybersecurity Information Sharing Act of 2015, or CISA, passed Tuesday by the U.S. Senate, is aimed at promoting information sharing between the public and private sectors. The bill sets up a system for threat intelligence information sharing between the two sectors led by the director of national intelligence.
The bill would bypass privacy and antitrust laws that currently prevent the sharing of information after an attack. In theory, sharing such information could allow other businesses more time to put in place procedures to prevent a similar attack on their operations.
The federal government expects that businesses sharing data can help each other prevent multiple types of attacks, including cyber, terrorist and economic attacks. Under the bill, non-relevant information that could identify specific people could theoretically be stripped from shared threat intelligence, but could be used by whoever receives it for its own purposes if it is not removed.
CISA, which must now be reconciled with a similar bill passed earlier this year by the House of Representatives, has generated a lot of controversy in the IT industry by bringing hot-button issues around security and information sharing to the forefront.
Major tech giants, such as Apple, Google and Dropbox, that partner with solution providers have voiced their concerns about the bill, saying that it threatens information privacy.
Privacy advocates also warn that CISA will funnel data to the National Security Agency. A law forbidding the NSA from bulk collection of U.S. call metadata was passed just this past summer.
Meanwhile, supporters say that better information-sharing support between the public and private sectors will help facilitate better security for all involved.
While it is important that the government moves forward to combat cyberthreats, CISA may not be the best way to do so, said Jerry Craft, senior security consultant and chief information security officer at Nth Generation Computing, a San Diego-based solution provider.
Craft told CRN that he is a big fan of information sharing, but not when it comes to customer details.
“Sharing of personal information is something users have to handle themselves,” Craft said. “But I’m not sure how preventing cybersecurity attacks can work without sharing details. We need the details to shut down an attack. But [former NSA employee Edward] Snowden showed there are some dark places where sharing can go.”
One place where more sharing is needed is getting information from the government, said Craft, who as a former CISO at a major bank dealt with government officials who seemed to want as much information as they could get without giving anything back.
“When we reached out to the FBI or the Secret Service, we did not get any information in return,” he said. “We saw information exchange as a one-way street. It was, ‘You tell us everything you know, and we’ll tell you nothing.’ The government should get together with a council of peers who can work together instead of a bill like CISA.”
Chris Kirschke, vice president of solutions, security and cloud at Bedrock Technology Partners, a San Diego-based solution provider, told CRN he looked at the act and was very disappointed.
“It’s an incomplete bill,” Kirschke said. “First of all, it’s missing clarity. A lot of terms in there are not defined, terms like ‘substantial manner’ or ‘substantial harm’ or what an ‘information system’ is comprised of. It’s a poor effort by the Senate to understand the threat landscape.”
A major issue with the bill is the lack of control over personal information once it is passed to a government agency, Kirschke said. “Once it’s deemed appropriate for cybersecurity purposes, there’s no limit on what someone can do with it,” he said.
The biggest issue is the immunity offered to companies who provide personal information to officials after a breach, Kirschke said.
“I work for a solution provider,” he said. “I can take my competitor’s network down and have immunity for it. If MasterCard gets pissed at Visa, they can threaten them. This could lead to a lot of playground fights. If I can justify my action as ‘good faith efforts,’ I can get away with it.”
CISA could encourage active collaboration on personal data with the government, Kirschke said.
“If [the Department of Homeland Security] comes to me and asks for information, I can provide it without taking the time to check into the background of how the data will be used because of immunity,” he said. “I get the need for sharing. But we need some kind of clearinghouse. This bill is not a step in the right direction.”
The Senators should have spent more time in the industry with companies that deal with security issues, Kirschke said.
“Why not put the appropriate data in the public domain and let companies deal with it responsibly?” he said. “Knowledge is power. If you have a public with knowledge, you have a knowledgeable public. We don’t need the NSA or the FBI controlling the information. If everyone has the information, they can make the right decisions.”
Joe Kadlec, vice president and senior partner at Consiliant Technologies, an Irvine, Calif.-based solution provider, called CISA a “double-edged sword” because of the good and the harm it could do.
Kadlec told CRN he is not happy about the idea of sharing personal information. “I’m all for sharing of information that leads to the arrest of cybercriminals,” he said. “But not about turning over private information. It’s not always relevant information.”
The double-edged sword is the fact that, if certain information can be shared with immunity, a company might just turn all information over, Kadlec said. That, he said, leads to concerns about who gets the information, how well it’s protected, and what the government will do with the information once it gets it.
“The majority of our customers, when it comes to their customers’ information, don’t want to end up on the cover of the Wall Street Journal because of a breach,” he said. “And CIOs are concerned about the potential for being personally liable for a breach.”
Sarah Kuranda contributed to this story.