
02/04/2014

Hardware OEMs Need Not Fear Software Abstraction

Until recently, networking had never experienced innovation that would so dramatically benefit the work of software engineers and developers worldwide. Software Defined Networking (SDN) and Network Function Virtualization (NFV) now take center stage in a technological revolution. But how will these relatively new technologies affect traditional embedded hardware markets? Though SDN and NFV offer substantial benefits to networking infrastructures, VDC Research believes SDN/NFV technologies will have only a marginal impact on embedded hardware markets through 2018.

Mobile network carriers and internet service providers now have an answer for further optimizing and improving their legacy infrastructures: a virtual, mobile network capable of supporting virtual devices decoupled from physical design. SDN and NFV are closely related in that both optimize hardware resources via software. More specifically, SDN lets users program the network by separating the data plane from the control plane, while NFV enables agile distribution of networking resources when and where they are needed. With this elevated level of programmability, users can more easily optimize network resources. Service and network providers can now increase network agility and service quality, shorten time-to-market, and deliver a more dynamic, service-driven virtual network.

Because NFV focuses on distributing networking resources when and where they are needed, the technology's principal supporters are telecom service providers such as AT&T, BT Group, Deutsche Telekom, Orange, Telecom Italia, Telefónica, and Verizon. SDN, on the other hand, focuses more on enterprise networking and the associated embedded hardware. Software solution providers such as Big Switch Networks, Cisco, Dell, HP, IBM, Juniper, and VMware all offer SDN controllers.

Growth in cloud-based solutions has led more and more companies to rely on SDN and NFV as major components of their technology infrastructure. By abstracting network hardware with either NFV or SDN, companies can take advantage of changing market dynamics or application requirements rather than being exposed by them. For example, IT administrators once had to rely on costly ASICs to control network traffic flow. With SDN, an administrator can manage data traffic effectively on legacy or commodity hardware.
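
To make that programmability concrete, here is a minimal, hypothetical sketch of pushing a forwarding rule through an SDN controller's northbound REST interface. The controller address, `/flows` endpoint, and JSON schema are placeholders invented for illustration (real controllers such as OpenDaylight, Floodlight, and Ryu each define their own northbound APIs), but the pattern is the point: forwarding behavior is programmed in software, and switches in the data plane simply apply the rules they receive.

```python
# Hypothetical sketch: installing a flow rule via an SDN controller's
# northbound REST API. Endpoint and schema are invented for illustration;
# consult your controller's documentation for the real interface.
import requests

CONTROLLER = "http://sdn-controller.example.com:8181"  # placeholder address

flow_rule = {
    "switch": "00:00:00:00:00:00:00:01",  # datapath ID of the target switch
    "priority": 100,
    # Match IPv4 traffic (eth_type 0x0800) destined for one host...
    "match": {"eth_type": 2048, "ipv4_dst": "10.0.0.5"},
    # ...and forward it out port 2.
    "actions": [{"type": "OUTPUT", "port": 2}],
}

resp = requests.post(f"{CONTROLLER}/flows", json=flow_rule, timeout=5)
resp.raise_for_status()
print("Flow rule installed:", resp.status_code)
```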

NFV is a straightforward, carrier-driven technology that continues to grow in popularity in telecom applications. Deploying SDN is not as straightforward. The main issue with SDN is the technology's newness: we recognize that SDN's relative infancy is problematic for those lobbying IT budget holders. IT operations are the backbone of many critical business functions, though, and it is important that suppliers reinforce the operational and fiscal advantages of SDN technology.

Not only are SDN and NFV still in their infancy, but early adopters are also potentially exposed to higher deployment costs, a shortage of trained personnel, and insufficient open source software support (which will matter more as open source software continues expanding into embedded applications and technologies like SDN/NFV). When fully developed, however, SDN and NFV have the potential to greatly increase the effectiveness and manageability of networking functions. Despite the vast benefits these technologies enable, they will only marginally impact the embedded hardware market over the next several years.

by Conor Peal, Research Associate

12/18/2013

Real-time Analysis Accelerating Hadoop Market Opportunities

In the big data environment, Hadoop is the fast-growing open source batch processing system emerging as the popular choice for companies handling large volumes of multi-structured data. Hadoop excels at distributed processing and is primarily used for extracting, storing, and processing both structured and unstructured data. Its weakness, however, is its inability to support real-time analysis. Typically, once Hadoop has processed the data, the data is moved into SQL-based environments for real-time analysis, and the major disadvantage of this architecture is the added cost of pushing data into SQL-based engines. Hadoop is fully scalable, and if real-time analysis capabilities were integrated, it would become one of the most efficient and cost-effective solutions available to customers. As a result, real-time analysis capabilities in Hadoop represent an increasingly significant opportunity that few vendors currently address.
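
To ground the batch framing, here is a minimal word-count-style sketch in the Hadoop Streaming idiom, where mappers and reducers are ordinary scripts reading stdin and writing stdout. The job wiring (invoking it via `hadoop jar hadoop-streaming.jar ...`) and the input paths are assumed rather than shown.

```python
# Illustrative Hadoop Streaming job: mapper emits (token, 1) pairs;
# Hadoop shuffles and sorts by key; reducer sums counts per key.
import sys

def mapper():
    # One (key, count) pair per token on each input record.
    for line in sys.stdin:
        for token in line.split():
            print(f"{token}\t1")

def reducer():
    # Input arrives grouped and sorted by key; accumulate per-key totals.
    current, count = None, 0
    for line in sys.stdin:
        key, value = line.rstrip("\n").split("\t")
        if key != current:
            if current is not None:
                print(f"{current}\t{count}")
            current, count = key, 0
        count += int(value)
    if current is not None:
        print(f"{current}\t{count}")

if __name__ == "__main__":
    # Run as "script.py map" or "script.py reduce" in the streaming job.
    mapper() if sys.argv[1] == "map" else reducer()
```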

If big data technology vendors can deliver a real-time query option within Hadoop, it would eliminate the need to move data into SQL-based environments. Last spring, Cloudera released the Impala query engine, which scales its open source massively parallel processing (MPP) solution across Hadoop, allowing customers to run SQL queries on data stored in Hadoop. This eliminates the need for multiple platforms or data sets, as everything is done on the same Hadoop data. EMC followed with HAWQ, an SQL query engine that scales its Greenplum MPP solution across Hadoop. Twitter recently released Summingbird, which merges Hadoop and Storm, Twitter's open source distributed real-time computation system, into one open source system. This hybrid combines the best of both worlds: Hadoop processes the large volumes of data while Storm handles the real-time analytics.
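
As a hedged illustration of what SQL-on-Hadoop access looks like from the client side, the sketch below issues an interactive query to an Impala daemon using, for example, the open source impyla Python client. The host, table, and column names are placeholders invented for illustration.

```python
# Illustrative sketch: interactive SQL over data stored in Hadoop via
# Impala. The query runs in Impala's MPP engine directly over HDFS data;
# no export into a separate SQL warehouse is required.
from impala.dbapi import connect

conn = connect(host="impala-daemon.example.com", port=21050)  # placeholder host
cursor = conn.cursor()

cursor.execute("""
    SELECT device_id, COUNT(*) AS events
    FROM sensor_logs                  -- hypothetical table
    WHERE event_time >= '2013-12-01'
    GROUP BY device_id
    ORDER BY events DESC
    LIMIT 10
""")
for row in cursor.fetchall():
    print(row)
```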

The need for these solutions is becoming more apparent, and with the first few solutions already available, others are not far behind. One issue is that most of the current solutions are still slower than queries against relational databases. If a solution becomes available that can run Hadoop queries as quickly as queries against relational databases, the impact on the big data analytical landscape will be significant.

By Sarah Forman

Research Assistant, M2M & Embedded Technology

12/05/2013

The 64-bit Processor Bandwagon

Shortly after Apple announced that its iPhone 5s would feature a 64-bit processor, Samsung quickly followed with word that it, too, was developing a 64-bit processor for mobile devices. Other major companies have since announced intentions to release 64-bit mobile processors in the near future; rumors suggest Broadcom, NVIDIA, and Qualcomm could unveil new processors as early as January at CES 2014. This led us to ask: how much real value does a 64-bit processor currently bring to mobile products?

32-bit processors can address a maximum of 4 GB of RAM. With access to 2^64 memory locations instead of 2^32, a 64-bit processor can address vastly more. However, memory in current mobile devices is far from reaching the 4 GB ceiling: the iPhone 5s features only 1 GB of RAM, and the Samsung Galaxy Note 3 is equipped with 3 GB. Clearly, memory capacity is not a major constraint in mobile device development compared with other factors, and it would not drive the need for 64-bit processors in this arena.
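
For a quick sanity check on those figures, the address-space arithmetic is straightforward:

```python
# Addressable-memory arithmetic behind the 32-bit vs. 64-bit comparison.
GiB = 2**30
EiB = 2**60

addressable_32 = 2**32  # bytes a 32-bit pointer can distinguish
addressable_64 = 2**64  # bytes a 64-bit pointer can distinguish

print(addressable_32 / GiB, "GiB")        # 4.0 GiB -- the 32-bit ceiling
print(addressable_64 / EiB, "EiB")        # 16.0 EiB (exbibytes)
print(addressable_64 // addressable_32)   # 4294967296x more address space
```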

Not only can a 64-bit architecture process larger quantities of data, but the added memory locations can also increase data processing speed. While hardware capabilities have improved, however, software is still a step behind. Lagging software is forcing processor OEMs to build 64-bit processors with backward compatibility for 32-bit software environments, as seen with Apple's A7 chip and upcoming 64-bit processors based on the ARMv8 architecture. Yet because 32-bit programming platforms are not optimized for 64-bit processors, devices cannot take full advantage of the expanded address space and see only marginal performance gains. Certain applications may even run slower because they execute in a sub-optimal 32-bit environment.

Currently, it seems most companies are developing 64-bit processors simply to remain competitive with Apple's A7, despite the lack of drastic performance improvements. Consumer perception and marketing drive current development, so that companies can list a 64-bit processor in their specifications. In the longer run, though, the 32-bit to 64-bit shift could alter the mobile device's role for the average consumer: mobile devices may become an increasingly viable and competitive alternative to PCs, able to process data at comparable speeds. Such a transition is still years away, as even the 16-bit to 32-bit transition for PCs took around 10 years.

As companies rush to release their own 64-bit mobile processors, much R&D investment is spent inefficiently on getting working products out the door rather than on optimizing the embedded hardware to its full potential. With branding as the driver of development, the current use of 64-bit processors in mobile devices remains more a marketing gimmick than a technological advancement, and it will likely remain so over the next couple of years.

By Howard Wei

Research Assistant, M2M & Embedded Technology

12/02/2013

Can Intel Manufacture Architectural Gains?

Recent announcements stemming from Intel's annual analyst day have provided ample insight into the company's embedded strategy to maintain and grow the x86 architecture. While Intel's plan to quadruple tablet processor shipments next year would put a dent in ARM's share of that arena, the company's expanded custom foundry business offers significantly more long-term potential. Intel will provide access to Intel Architecture that can be combined with customers' IP cores in system-on-chip devices, and it will let customers select their level of engagement, from design and test services to pure manufacturing. Though details on Intel's refocused foundry business remain scant, we believe it has the potential to be a prominent contributor to x86 growth over the next several years.

Intel’s expanded foundry services greatly support x86 in a number of ways:

  • First, Intel retains the control to pick and choose exactly what it will fabricate. Intel is not going to cannibalize sales of its own processors by manufacturing SoCs for fabless competitors in the embedded markets where it plays. The company can equally regulate the integration of third-party IP.
  • Second, expanded customization services will help Intel's new Quark SoCs penetrate new and traditional embedded markets. Intel will surely discount customization services for its homegrown products because of the higher margins captured from semiconductor design. Such pricing reductions are the premise for Intel's planned expansion into tablet devices.
  • Third, Intel’s foundry ecosystem is growing to accommodate increasing flexibility in development platforms. In addition to Intel EDA tools, customers will be able to leverage software development tools from Cadence, Synopsys, and Open-Silicon.

By expanding fabrication services, Intel stands to grow its architectural share at the expense of dedicated foundries. The expansion also enables Intel to respond more proactively to market shifts. Though specific details on Intel's new semi-customization services remain scarce, we believe the growing foundry business will help expand x86 in embedded markets over the long term.

10/16/2013

Winning Big With M2M – Pamplona Capital Management Increases its Stake

Yesterday, Mac-Gray was acquired by CSC ServiceWorks at a 41% premium over Monday's share price. This is interesting to us because of the M2M element. The newly combined entity is a perfect example of what can happen when things, coin-operated or otherwise, are connected to the cloud.

Mac-Gray provides M2M-enabled commercial laundry products and services. For example, if you owned an apartment building and wished to provide an on-premises laundry room, you might contract with Mac-Gray to provide that service. Mac-Gray would install and manage the washers and dryers, and you, as the building owner, could collect a revenue stream without the hassle of owning and maintaining the machines yourself.

The M2M element comes in because Mac-Gray was extremely successful in embedding M2M connectivity into the laundry machines it owns, sells, and/or supports. Because of M2M, users can see whether machines are available at their facility and receive alerts via a mobile app when their laundry is done. Multiple payment options are now more practical, and both the business owner and users have appropriate visibility into machine usage patterns. Service is automatically summoned at the first sign of trouble, and Mac-Gray can efficiently assign its staff as needed.

Pamplona Capital is quietly building an M2M-enabled portfolio of companies. CSC ServiceWorks was formed in May 2013 through the acquisition of CoinMach, which also provides laundry services, and Air-Serv, which provides the tire-inflation and vacuum kiosks you often see at convenience stores and gas stations. In June, the portfolio grew with the acquisition of Sparkle, a Canadian laundry services firm. Now the portfolio, held by Pamplona Capital Management, includes Mac-Gray. One would think that leveraging Mac-Gray's M2M expertise along with elements from the other entities should create an even stronger M2M-enabled set of products and a streamlined service organization to support them.

I imagine that Pamplona's name has something to do with the running of the bulls, which would be a good thing in financial markets but dangerous otherwise. In this case, running with M2M is a pretty safe bet.

10/09/2013

For Microprocessor Vendors, The Enterprise Is Not the Future According to New Research from VDC Research

VDC is pleased to announce the publication of its annual outlook for the global market for embedded CPUs, MCUs, and FPGAs. This research is an invaluable strategic and tactical planning tool for chip, tool, and board vendors.

Highlights include:

  • We expect ARM to continue to take CPU market share from Intel in the years to come, though Intel will succeed in defending its position in high-performance applications.
  • Xilinx will cede FPGA market share to growing competitors Altera, Lattice Semiconductor, and Microsemi – who all stand to benefit from strong global demand for communications equipment, and OEMs’ continued migration away from ASICs.
  • The MCU market will grow rapidly, with the automotive sector representing an increasingly large share of the market.
  • Heterogeneous computing will drive big changes in the markets for all discrete processing technologies. As integrated architectures are used to consolidate functionality and boost processor efficiency, traditional vendors will need to deploy new business strategies to drive growth and margins.
  • The importance of tools when selecting a chip will drive additional M&A activity, as large chip vendors swallow smaller tool providers.  Potentially attractive acquisition targets include DDC-I, IAR, or Lauterbach.
  • The market for these embedded processing technologies (CPUs, MCUs, and FPGAs combined) will grow to over US$40 billion by 2017, at an overall compound annual growth rate of 6.9%.

To learn more, download the executive brief now or see our recent press release here.

Embedded Processor Revenues (2012-2017)


Intel stacks more chips onto its existing M2M wager

The wagers being played in the high-stakes M2M game are definitely getting higher. As we noted last week, Eurotech, which has long been seen as a leader in edge gateways and M2M solutions, divested its Parvus asset in order to focus more closely on its core markets.

Yesterday, Intel announced a commitment to deliver intelligent gateway solutions in early 2014, offering a combined platform of Intel, Wind River, and McAfee products. These new, pre-validated products will allow embedded computer suppliers and their customers to more easily support M2M deployments targeting legacy equipment as well as new products. The Intel M2M gateway solutions will scale from small deployments of a few supported things, using Intel's new Quark or Atom processors, to large-scale deployments driven by higher-powered Xeon products.

Why are companies such as Intel and Eurotech so focused on gateways? VDC's previous and continuing coverage of scalable edge computing products has the answer. Unlike many markets, this one shows, and will continue to show, solid growth in all regions as the Internet of Things (IoT) and M2M become increasingly vital for reducing operational costs and generating new revenue streams.


In 2014, VDC will again be reporting on intelligent edge node gateways. We look forward to your company's participation as we shape the report scope and conduct the required research. For more information, contact us at dlaing@vdcresearch.com.

10/01/2013

Curtiss-Wright is Betting Big on SWAP

In today's earlier blog covering Eurotech's big bet on M2M, we mainly discussed the Eurotech side of the deal. In this follow-up, we provide VDC's view of the Curtiss-Wright side, as we believe the company has a shrewd strategy in play. In Parvus, Curtiss-Wright gains a complementary set of mobile computing and networking products that will allow diversification beyond its core Mil/Aero markets into markets such as industrial, where ruggedized products with superior Size, Weight, and Power (SWAP) attributes will be increasingly valued. With ~$20M or more in 2012 revenues and EBITDA of ~25%, Parvus is definitely not a white elephant.

In summary, we see the basic business principles behind the Eurotech/Curtiss-Wright deal as similar to those behind the 2012 blockbuster trade between the Boston Red Sox and Los Angeles Dodgers: the Dodgers acquired valuable players complementary to their target market, while the Red Sox gained liquidity to pursue a different team-building strategy. Given that 2013 finds both the Red Sox and Dodgers well positioned to succeed in the playoffs, one can hope that Eurotech and Curtiss-Wright will also succeed in their pursuits in 2013 and beyond.

Eurotech is Betting "All-In" on M2M and the Internet of Things

Today's announcement of Eurotech's sale of its Parvus division to Curtiss-Wright represents an all-in bet by Eurotech on the growing Internet of Things (IoT) and M2M market in general. The approximately $38M in capital will allow Eurotech to further develop and support its portfolio of High Performance Embedded Computing (HPEC) and "green," highly efficient supercomputing products required at the edge of device networks, as well as the cloud resources that support them.

Over the next three years, we can expect strategies and actions such as Eurotech's to become the rule rather than the exception. One limiting factor for rapid M2M adoption is that the billions of things to be connected communicate over a myriad of protocols, frequencies, and methods with diverse data structures, often under strict security requirements. In these cases, gateway devices such as Eurotech's Reliagate platform perform the necessary conversion, aggregation, and security functions. Eurotech is not alone in its increased focus on M2M gateway platforms; it collaborates with IBM in support of the MQTT protocol used in gateway middleware deployments. Competition for edge computing software and hardware is increasing, however. At Oracle's OpenWorld event last week, Freescale and Oracle announced a collaborative One Box platform product that includes Freescale processors, Java SE, and the Java Embedded Suite. The One Box platform is directly targeted at M2M gateway applications in the consumer/home, medical, mobile, and industrial markets.
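
As a small, hedged illustration of the gateway pattern, the sketch below publishes one normalized sensor reading over MQTT using the open source Eclipse Paho Python client (1.x-era API). The broker address, topic, and payload fields are placeholders invented for illustration.

```python
# Illustrative gateway-side MQTT publish: the gateway normalizes a
# field-device reading into one data structure, then forwards it
# upstream over a lightweight publish/subscribe protocol.
import json
import paho.mqtt.client as mqtt

client = mqtt.Client()
client.connect("broker.example.com", 1883)  # placeholder broker

# Hypothetical normalized reading aggregated from a legacy device.
reading = {"device": "pump-17", "temp_c": 74.2, "ts": 1380585600}

# QoS 1 asks the broker to acknowledge delivery at least once.
client.publish("plant/floor1/pump-17/telemetry", json.dumps(reading), qos=1)
client.disconnect()
```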

09/30/2013

Another Opportunity for MEMS?

Almost fourteen years ago, in December of 1999, the Worcester (MA) Fire Department lost six brave men in a fire in an abandoned cold storage warehouse. Two of the men had gotten lost; the other four perished in a valiant effort to rescue them. This is particularly relevant to VDC and to me; the firm is located only 23 miles from the disaster site and, in a “former life,” I worked for a company that manufactured personal lighting equipment for mining and other hazardous occupations, including the Fire Service. The Worcester FD was one of our customers.

At the time, there was very little personal safety equipment available to firefighters. True location systems were large, clumsy, inaccurate, and very expensive. "State of the art" meant Personal Alert Safety System (PASS) devices. These units, which could be integrated into a firefighter's breathing apparatus or personal lights, or attached to turnout gear as discrete devices, comprised a small battery pack coupled with a loud piezoelectric alarm. Mercury switches sensed movement (or, more properly, a lack of movement) to activate the alarm, which could also be activated by hand. Worcester's firefighters were equipped with these. However, they were not adequate to prevent the tragedy: the noise level inside the fire and the labyrinthine construction of the building made them ineffective (see http://www.usfa.fema.gov/downloads/pdf/publications/tr-134.pdf, "Lessons Learned," nos. 9 & 10). But even now, in 2013, no reliable, accurate, and cost-effective system exists.

The Department of Homeland Security and WPI (Worcester Polytechnic Institute) have each developed systems that show some promise. DHS's entry is called GLANSER (Geospatial Location Accountability and Navigation System for Emergency Responders); one WPI system is called the PPL (Precision Personnel Locator). Both utilize MEMS-based gyroscopes and accelerometers to sense both motion and position in component devices called IMUs (Inertial Measurement Units). Ranging signals and internal position estimates would be sent via VLF (Very Low Frequency) pulsed transmissions to external receivers mounted on fire apparatus or incorporated into ladders. VLF signals between 170 and 200 kHz penetrate steel and concrete structures more easily than higher frequencies do. ULF (Ultra Low Frequency) signals, in the range of 20 Hz, offer even better penetration; these are used to locate miners trapped underground and for communication with submarines. However, frequencies this low require large, cumbersome antenna systems and thus are not practical for use on the fireground.

The ranging signals and IMU internal position estimates would be combined by the receiving system using a synthetic aperture imaging algorithm.
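
To see why the IMU estimates cannot stand alone, consider a minimal, illustrative simulation of inertial dead reckoning: double-integrating accelerometer samples lets even a small uncorrected bias grow quadratically with time, which is exactly the drift the external ranging signals are meant to correct. The sample rate, bias, and noise figures below are invented for illustration.

```python
# Illustrative dead-reckoning drift: integrate biased, noisy
# accelerometer samples twice and watch the position error grow.
import random

dt = 0.01    # 100 Hz sample rate (assumed)
bias = 0.02  # m/s^2 of uncorrected accelerometer bias (assumed)
velocity = position = 0.0

for _ in range(60 * 100):  # one minute of samples
    true_accel = 0.0                               # wearer standing still
    measured = true_accel + bias + random.gauss(0, 0.05)
    velocity += measured * dt                      # first integration
    position += velocity * dt                      # second integration

# Error grows roughly as 0.5 * bias * t^2, i.e. tens of meters per minute.
print(f"Position error after 60 s: {position:.1f} m")
```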

Although these systems do show promise, there is still a long way to go. The IMUs have a tendency to drift, making location somewhat problematic. Cost is also an issue. But at least progress is being made, and MEMS suppliers can help by working with both DHS and WPI to improve the accuracy of the IMUs. We believe that development of viable systems should be a national priority, especially in view of today’s threats of terrorism.