82 posts categorized "Market Segment"

10/09/2013

For Microprocessor Vendors, The Enterprise Is Not the Future According to New Research from VDC Research

VDC is pleased to announce the publication of its annual outlook for the global market for embedded CPUs, MCUs and FPGAs.  This research is an invaluable strategic and tactical planning tool for chip, tool, and board vendors. 

Highlights include:

  • We expect ARM to continue to take CPU market share from Intel in the years to come, though Intel will succeed in defending its position in high-performance applications.
  • Xilinx will cede FPGA market share to growing competitors Altera, Lattice Semiconductor, and Microsemi – all of which stand to benefit from strong global demand for communications equipment and OEMs’ continued migration away from ASICs.
  • The MCU market will grow rapidly, with the automotive sector representing an increasingly large share of the market.
  • Heterogeneous computing will drive big changes in the markets for all discrete processing technologies. As integrated architectures are used to consolidate functionality and boost processor efficiency, traditional vendors will need to deploy new business strategies to drive growth and margins.
  • The importance of tools when selecting a chip will drive additional M&A activity, as large chip vendors swallow smaller tool providers. Potentially attractive acquisition targets include DDC-I, IAR, and Lauterbach.
  • The market for these embedded processing technologies (CPUs, MCUs and FPGAs combined) will grow to over US$40 billion by 2017, at a compound annual growth rate of 6.9% overall.
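As a rough sanity check on that forecast, the 2012 base implied by the 2017 figure and the stated growth rate can be back-calculated (a hypothetical illustration; the actual base-year figure is in the full report):

```python
def project(value, cagr, years):
    """Compound a base-year value forward at a fixed annual growth rate."""
    return value * (1 + cagr) ** years

# Back-calculate the 2012 base implied by ~$40B in 2017 at a 6.9% CAGR.
base_2012 = 40e9 / (1 + 0.069) ** 5

print(round(base_2012 / 1e9, 1))  # implied 2012 base, roughly $28-29B
```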

To learn more, download the executive brief now or see our recent press release here.

Embedded Processor Revenues (2012-2017)


Intel stacks more chips onto its existing M2M wager

The wagers being played in the high-stakes M2M game are definitely getting higher. As we noted last week, Eurotech, which has long been seen as a leader in edge gateways and M2M solutions, divested its Parvus asset in order to place increased focus on its core markets.

Yesterday, Intel announced a commitment to deliver intelligent gateway solutions in early 2014, offering a combined platform of Intel, Wind River, and McAfee products. These new, pre-validated products will allow embedded computer suppliers and their customers to more easily support M2M deployments targeting legacy equipment as well as new products. The Intel M2M gateway solutions will scale from small deployments of a few supported things, using Intel’s new Quark or Atom processors, to large-scale deployments driven by higher-powered Xeon products.

Why are companies such as Intel and Eurotech so focused on gateways? VDC’s previous and continuing coverage of scalable edge computing products has the answer. Unlike many markets, this one is seeing, and will continue to see, solid growth in all regions as the Internet of Things (IoT) and M2M become increasingly vital for reducing operational costs and creating new revenue streams.


In 2014, VDC will again be reporting on intelligent edge node gateways. We look forward to your company’s participation as we shape the report scope and conduct the required research. For more information, contact us at dlaing@vdcresearch.com.

10/01/2013

Curtiss-Wright is Betting Big on SWAP

In today's earlier blog covering Eurotech’s big bet on M2M, we mainly discussed the Eurotech side of the deal. In this follow-up we provide VDC's view of the Curtiss-Wright side, as we believe the company has a shrewd strategy in play. With Parvus, Curtiss-Wright gains a complementary set of mobile computing and networking products that will allow diversification beyond its core Mil/Aero markets into markets such as industrial, where ruggedized products with superior Size, Weight, and Power (SWAP) attributes will be increasingly valued. With ~$20M or more in 2012 revenues and an EBITDA margin of ~25%, Parvus is definitely not a white elephant.

In summary, we see the basic business principles behind the Eurotech / Curtiss-Wright deal as similar to those behind the 2012 blockbuster trade between the Boston Red Sox and Los Angeles Dodgers: the Dodgers acquired valuable players complementary to their target market, while the Red Sox gained liquidity to pursue a different team-building strategy. Given that 2013 finds both the Red Sox and Dodgers well positioned to succeed in the playoffs, one can hope that Eurotech and Curtiss-Wright will also succeed in their pursuits in 2013 and beyond.

Eurotech is Betting "All-In" on M2M and the Internet of Things

Today’s announcement of the Eurotech sale of its Parvus division to Curtiss-Wright represents an all-in bet by Eurotech on the growing Internet of Things (IoT) and M2M market in general. The approximately $38M in capital will allow Eurotech to further develop and support its portfolio of High Performance Embedded Computing (HPEC) and “green,” highly efficient supercomputing products required at the edge of device networks, as well as the cloud resources that support them.

Over the next three years, we can expect strategies and actions such as those undertaken by Eurotech to be the rule rather than the exception. One limiting factor for rapid M2M adoption is that the billions of things that will be connected communicate over myriad protocols, frequencies, and methods, with diverse data structures. At the same time, there are often strict security requirements to be followed. In these cases, gateway devices such as Eurotech’s Reliagate platform perform the necessary conversion, aggregation, and security functions. Eurotech is not alone in its increased focus on M2M gateway platforms, collaborating with IBM in support of the MQTT protocol used in gateway middleware deployments. However, competition for edge computing software and hardware is increasing. At Oracle’s OpenWorld event last week, Freescale and Oracle announced a collaborative One Box platform that includes Freescale processors, Java SE, and the Java Embedded Suite. The One Box platform is directly targeted at M2M gateway applications in the consumer/home, medical, mobile, and industrial markets.

09/30/2013

Another Opportunity for MEMS?

Almost fourteen years ago, in December of 1999, the Worcester (MA) Fire Department lost six brave men in a fire in an abandoned cold storage warehouse. Two of the men had gotten lost; the other four perished in a valiant effort to rescue them. This is particularly relevant to VDC and to me; the firm is located only 23 miles from the disaster site and, in a “former life,” I worked for a company that manufactured personal lighting equipment for mining and other hazardous occupations, including the Fire Service. The Worcester FD was one of our customers.

At the time, there was very little in the way of personal safety equipment available to firefighters. True location systems were large, clumsy, inaccurate, and very expensive. “State of the art” comprised Personal Alert Safety Systems, or PASS, devices. These units, which could be integrated into a firefighter’s breathing apparatus or personal lights, or attached to turnout gear as discrete devices, comprised a small battery pack coupled with a loud piezoelectric alarm. Mercury switches were used to sense movement (or, more properly, a lack of movement) to activate the alarm, which could also be activated by hand. Worcester’s firefighters were equipped with these. However, they were not adequate to prevent the tragedy. The noise level inside the fire and the labyrinthine construction of the building made them ineffective (see http://www.usfa.fema.gov/downloads/pdf/publications/tr-134.pdf, “Lessons Learned,” nos. 9 & 10). But even now, in 2013, no reliable, accurate, and cost-effective system exists.

The Department of Homeland Security and WPI (Worcester Polytechnic Institute) have each developed systems that show some promise. DHS’s entry is called GLANSER (Geospatial Location Accountability and Navigation System for Emergency Responders). One WPI system is called the PPL (Precision Personnel Locator). Both utilize MEMS-based gyroscopes and accelerometers to sense both motion and position in component devices called IMUs (Inertial Measurement Units). Ranging and internal position estimates would be sent via VLF (Very Low Frequency) pulsed transmissions to external receivers mounted on fire apparatus or incorporated into ladders. VLF signals between 170 and 200 kHz are more easily able to penetrate steel and concrete structures than are higher frequencies. ULF (Ultra Low Frequency) signals, in the range of 20 Hz, offer even better penetration; these are used to locate miners trapped underground and for communication with submarines. However, frequencies this low require large, cumbersome antenna systems and thus are not practical for use on the fireground.
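The antenna-size tradeoff described above follows directly from wavelength (λ = c/f). A quick back-of-the-envelope check, using a quarter-wave element as a rough yardstick for an efficient antenna (a simplification; practical low-frequency systems use electrically short loop antennas):

```python
C = 299_792_458.0  # speed of light, m/s

def wavelength_m(freq_hz):
    """Free-space wavelength in meters."""
    return C / freq_hz

# Quarter-wave elements are a common (if rough) yardstick for antenna size.
vlf_quarter = wavelength_m(200e3) / 4  # upper end of the 170-200 kHz band
ulf_quarter = wavelength_m(20) / 4     # ~20 Hz ULF signals

print(round(vlf_quarter))  # ~375 m
print(round(ulf_quarter))  # ~3,750 km, expressed in meters
```

Even at VLF a full-size element is impractical to carry, which is one reason the receivers are mounted on apparatus or ladders; at ULF the numbers rule out portable antennas entirely.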

The ranging signals and IMU internal position estimates would be combined by the receiving system using a synthetic aperture imaging algorithm.
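Neither program has published its algorithm, but the core idea of blending drift-prone IMU dead reckoning with noisy-but-unbiased ranging fixes can be sketched as a one-dimensional complementary filter (a hypothetical illustration only, not the actual GLANSER or PPL algorithm):

```python
import random

def complementary_fuse(imu_deltas, ranges, alpha=0.95):
    """Blend IMU dead reckoning (smooth but drifting) with ranging
    fixes (noisy but drift-free). alpha sets how much the IMU is
    trusted on each step."""
    est = ranges[0]
    fused = [est]
    for delta, rng in zip(imu_deltas, ranges[1:]):
        est = alpha * (est + delta) + (1 - alpha) * rng
        fused.append(est)
    return fused

random.seed(42)

# Simulate a firefighter walking 1 m/s for 100 s.
true_pos = [float(t) for t in range(101)]
# IMU reports each 1 m step with a +0.05 m bias (the drift source) plus noise.
imu_deltas = [1.0 + 0.05 + random.gauss(0, 0.02) for _ in range(100)]
# Ranging gives absolute position with ~2 m noise but no bias.
ranges = [p + random.gauss(0, 2.0) for p in true_pos]

fused = complementary_fuse(imu_deltas, ranges)

imu_only = [true_pos[0]]
for d in imu_deltas:
    imu_only.append(imu_only[-1] + d)

err_fused = abs(fused[-1] - true_pos[-1])
err_imu = abs(imu_only[-1] - true_pos[-1])
# Dead reckoning alone accumulates ~5 m of bias over 100 steps;
# the fused estimate ends up much closer to the true position.
```

The filter leans on the smooth IMU estimate step to step while the ranging fixes slowly pull out the accumulated drift; production systems would use far more sophisticated estimators (e.g., Kalman filters over a full 3-D state).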

Although these systems do show promise, there is still a long way to go. The IMUs have a tendency to drift, making location somewhat problematic. Cost is also an issue. But at least progress is being made, and MEMS suppliers can help by working with both DHS and WPI to improve the accuracy of the IMUs. We believe that development of viable systems should be a national priority, especially in view of today’s threats of terrorism.

The Great Divide: Growing Armies for High-end Server Applications

Several recent reports stemming from the keynote at last week’s JavaOne conference indicate that IBM is working toward enabling GPU acceleration with Java, one of the most popular programming languages used in software development. This shortly follows the establishment of the OpenPOWER Consortium, an open development alliance based on IBM’s Power architecture, in which an announcement was made that IBM and Nvidia will work together to integrate the Power and CUDA GPU ecosystems. At this point, it’s hard not to think about the unfolding competitive strategies of other industry leaders such as AMD, ARM, and Intel to capture the burgeoning server markets for high-performance computing (HPC) and big data applications. The end result will determine more than who carves out the most revenue; it will also decide the fate of several processor architectures and their reach into new enterprise applications.

First, from a software standpoint, IBM and Nvidia’s recent announcements will surely attract and retain a lot of developers. Not only does the collaboration enable two pools of (typically deeply invested) software developers to pursue more application opportunities, it will also add support for Java, a common high-level language in which OEMs can easily acquire or build out expertise. Nvidia is no newcomer to GPU acceleration in demanding big data applications, and teaming up with IBM to offer a more holistic solution featuring more than its own proprietary CUDA language and GPU technology will improve integration with prospective businesses’ requirements and objectives.

On the other end, we have the splintering relationship between AMD and Intel. Traditionally an x86-only supplier, AMD will also be rolling out the first 64-bit ARM processors for datacenters in 2014, and will supply “processor-agnostic” SeaMicro servers and data center technology. Despite these sudden, uncomfortable ripples from AMD (which will continue to supply x86 server chips as well), Intel’s x86 ecosystem is still widely used in datacenters around the world. Furthermore, the company has its own line of computing products to compete with GPU acceleration – the Xeon Phi coprocessors, launched in mid-2012. Intel’s processors and coprocessors use common languages, models, and tools – maximizing developer support and preserving software expertise across the Xeon product line.

IBM and Nvidia’s growing relationship is hugely important for each company. Neither on its own would likely make a significant impact in the markets for large and hyperscale computing relative to Intel’s dominance and ARM’s growing influence. However, bridging popular CPU and GPU architectures will bode well for CUDA and Power as OEMs increasingly employ heterogeneous architectures to realize the flexibility of processing serial and parallel workloads simultaneously.

Keep Calm and Carry On, Discrete DSPs

Processor technologies are undeniably moving toward consolidated, integrated architectures to take advantage of multiple processor types in tandem while saving on size and power consumption. While such heterogeneous ICs are changing the way chipsets are developed, licensed (core IP), and manufactured, not all applications need or necessarily want them. High performance remains the chief metric by which end users select embedded processors, particularly those using digital signal processors (DSPs). As a result, intensifying application requirements will preserve the market for discrete DSPs in a world increasingly dominated by holistic systems-on-chip (SoCs).

Signal processing requirements are escalating in a variety of industries. The continued expansion of 4G wireless technology throughout Europe, as well as increased 3G adoption in emerging Asia-Pacific countries such as India, will fuel growth for high-performance DSPs in base stations and other access layer equipment. The greater demand to provide sufficient cellular quality of service and coverage is amplified by the tremendous growth in connected devices produced by the Internet of Things.


Though upgrading and deploying new backhaul networks will be the leading application for modern and next-generation DSPs, high-performance signal processing will remain crucial in other industries, such as consumer electronics and industrial automation. For example, the new Apple iPhone 5s includes a secondary coprocessor, the M7, embedded with DSP technology specifically designed to manage sensor workloads offloaded from the main processor – simultaneously increasing device performance and saving energy. Similarly, in the industrial automation space, the growth in shop-floor peripherals such as cameras and sensors for machine vision applications will necessitate the parallel processing performance of high-end DSPs.

Though several discrete processor markets continue to combat SoCs for design wins (including DSPs), the proliferation of heterogeneous processors has allowed traditional DSP suppliers to form new revenue streams through IP licensing, much like what is already done for CPU and GPU cores. Furthermore, new and growing application requirements will still require pure, optimized DSPs for the foreseeable future. For these reasons, we believe discrete DSP manufacturers can keep calm and carry on.

09/26/2013

Calling all Embedded Board & Module Suppliers – Stand up and be Heard!

The winds of change are blowing through the embedded computer board and module market. As 2014 approaches, recent market consolidations and reorganizations will solidify. 2013 will go down as one of the most important years for suppliers to make their voices heard by VDC. Why is this so important? In 2014 and beyond there will be significant shifts in demand for embedded board and module products, as seen in these results from VDC's 2013 survey.

Boards and Modules

Many OEM engineers will be designing their new products and, for the first time, will be using embedded modules as opposed to other product alternatives. At the same time, existing module users will be looking to migrate to the latest x86, ARM, and/or FPGA processors and module types. These OEM engineers will value an authoritative third-party source to better determine which product types are strongly supported and which will not gain the necessary market traction. On a similar note, these OEM engineers and other entities with an interest in the embedded market are looking to VDC to determine which suppliers are becoming the most resilient, innovative, and successful in the markets they serve. VDC provides this industry-wide guidance by taking a 360-degree view of the market but, even so, the supplier inputs we receive are among the most valuable.

At present, VDC has been in contact with over 90 suppliers we believe are significant to the market and, after many initial discussions, have provided our top-level estimates for each company. We are following these e-mails and calls with our 2013 survey, which contains detailed estimates. If you have NOT received these e-mails, calls, and survey, and you believe your company should be represented, please contact us immediately. If you have received the survey, your response MUST be received by Friday, October 4, to ensure any corrections or fresh guidance are included in the VDC models that drive our reports.


09/12/2013

Intel’s Sub-Atom Quark is a Big Opportunity

Intel finally announced earlier this week its answer for branching into true low-power embedded applications – the Quark SoC. Quark is Intel's first product for low-cost, small form-factor embedded markets, which often prioritize power consumption and footprint over performance. Though Intel faces tough competition in the embedded space from a variety of players, the growing opportunity generated by the pervasive Internet of Things will be large enough for Intel to carve additional share outside traditional PC and enterprise-computing markets.

Quark is Intel's first synthesizable CPU core, allowing others to incorporate third-party silicon IP into the SoCs Intel will manufacture. Technical details are scarce, but what we do know is that the demonstration chip presented at IDF 2013 is x86-compatible, measures one-fifth the size and one-tenth the power consumption of the Atom processor, and uses a single-core, single-threaded 32-bit architecture.

The market opportunity for Quark is great. Connectivity is creeping into ever more devices and applications, including the new SoC’s initial targets such as industrial automation, medical, and wearable devices. Though prospective customers can integrate various WiFi/cellular radios to enable connectivity, software remains the chief barrier to successful implementation. Intel hopes to mitigate development efforts by incorporating a software stack that includes security, manageability, and connectivity features. There is no word yet on supported standards.

However, IP-licensor ARM and processor suppliers such as Freescale and Qualcomm already have a firm foothold in several embedded markets. The ARM architecture has flourished in low-power designs, and ARM now intends to challenge Intel in high-density server and high-performance computing markets. Quark is poised to compete with ARM’s Cortex-M series, the ARM architecture most commonly cited – by over half of respondents – in our 2013 Embedded Hardware End User Survey. The rich ARM ecosystem offers vast resources and integration support that no single company can completely encapsulate.

While there is no off-the-rack market for Quark, OEMs and other embedded processor end users will greatly value the advanced production process and additional features. The limiting factor in Intel’s success will be price. The company isn’t accustomed to selling lower-cost units beyond its Atom line – parts that are still too costly for many high-volume embedded applications beyond tablets and handsets. Nevertheless, we believe Intel’s strong brand and ample resources will allow Quark to penetrate several new embedded market segments once production begins later this year.

09/09/2013

Will the Hynix Wuxi Fire Impact the Embedded Computer Market?

We expect the September 4th fire at a Hynix semiconductor fab producing DRAMs to adversely impact suppliers of embedded products. The perception of a possible supply disruption for DRAMs has already affected pricing. Although Hynix is downplaying the event, the likely effect will be somewhere between a temporary shortage and the larger industry impact that some investors are predicting. Suffice it to say that the potential mix of a cleanroom and “thick black smoke” would be disruptive, even if the equipment was not obviously damaged. It is possible that materials and surfaces in some parts of the facility were contaminated. If so, the effects of semiconductor or electronic circuit board contamination may take months or even years to appear, and this would have two possible impacts:

  • Some of the Hynix production equipment may become less reliable, which can impact supply.
  • Some of the components Hynix produces at that facility may have higher failure rates, particularly over the longer time frames in which embedded computers are deployed.

Whether either of these two possibilities will actually materialize may be immaterial, as embedded board and system suppliers are a cautious bunch who may choose to lock in supplies of DRAM products from alternative suppliers.