As you may have seen on our blog or in our research, one trend we have been following for several years is the use of multicore processor configurations in the embedded market. The topic has received a significant amount of industry attention, driven largely by silicon manufacturers' marketing initiatives as well as the community's general familiarity with the technology from PC- and server-class devices.
As many of you know, however, the embedded market is a different animal entirely. Its inherent diversity has produced a wide range of processing requirements, many of which have not needed the additional processing power offered by the newer chipsets. In fact, if you look back at some of our research results from 2008, you can see that actual multicore adoption still lags engineers' expectations.
Beyond any gaps in the broader embedded market's organic need for the additional processing power of multicore processors, a few other important factors have contributed to the slower-than-expected uptake. For one, as I am sure many if not all of you have heard before, it is a significant challenge to develop software that efficiently exploits the full capabilities of a multicore configuration, especially when most embedded engineers' experience is rooted in languages like C that were originally designed for serial program execution.
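To make that concrete, here is a minimal sketch of what parallelizing even a trivial computation looks like in C with POSIX threads. Everything in it (the thread count, the slice-per-thread decomposition, the names) is illustrative rather than drawn from any particular vendor SDK, but it shows the kind of explicit thread management and data partitioning that serial C code never had to deal with:

```c
#include <pthread.h>
#include <stdio.h>

#define N 1000000
#define NUM_THREADS 4

static double data[N];
static double partial[NUM_THREADS];

/* Each thread sums its own slice of the array and writes the
 * result to a private slot, so the workers never touch shared
 * mutable state. */
static void *sum_slice(void *arg)
{
    long t = (long)arg;
    long begin = t * (N / NUM_THREADS);
    long end = (t == NUM_THREADS - 1) ? N : begin + N / NUM_THREADS;
    double s = 0.0;
    for (long i = begin; i < end; i++)
        s += data[i];
    partial[t] = s;
    return NULL;
}

int main(void)
{
    for (long i = 0; i < N; i++)
        data[i] = 1.0;

    /* In serial C this whole program is a single for loop; here we
     * must create, dispatch, and join the workers by hand. */
    pthread_t threads[NUM_THREADS];
    for (long t = 0; t < NUM_THREADS; t++)
        pthread_create(&threads[t], NULL, sum_slice, (void *)t);

    double total = 0.0;
    for (long t = 0; t < NUM_THREADS; t++) {
        pthread_join(threads[t], NULL);
        total += partial[t];
    }
    printf("sum = %f\n", total);
    return 0;
}
```

Even in this toy case, correctness depends on getting the decomposition and the join logic right by hand, which hints at why larger real-world workloads are so hard to parallelize efficiently.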
Of perhaps even greater concern to many OEMs is the investment that legacy code bases represent. While it is certainly a challenge to develop software from scratch for these platforms, it can be an even greater challenge to ensure that legacy software assets are ported correctly and that no previously undetected runtime errors surface, so we expect this has led many OEMs to defer the transition to multicore for as long as possible.
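As a hypothetical illustration of the kind of previously undetected runtime error that can surface in such a port, the sketch below shows a classic lost-update race. Code of this shape can run for years without visible failure on a single-core part, where the read-modify-write rarely if ever gets interleaved, and then misbehave the first time its threads truly execute in parallel:

```c
#include <pthread.h>
#include <stdio.h>

#define ITERATIONS 1000000

/* A shared counter updated without synchronization. volatile does
 * not make the increment atomic; on multicore hardware the two
 * threads race and increments are silently lost. */
static volatile long counter = 0;

static void *worker(void *arg)
{
    (void)arg;
    for (long i = 0; i < ITERATIONS; i++)
        counter++; /* non-atomic read-modify-write */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    /* Expected 2000000; on multicore hardware the printed value is
     * typically lower and varies from run to run. */
    printf("counter = %ld (expected %d)\n", counter, 2 * ITERATIONS);
    return 0;
}
```

Bugs like this are especially worrisome for legacy code bases because they are intermittent, timing-dependent, and invisible to the single-core test suites that originally validated the software.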
So you might be left thinking ‘given these challenges, what’s so different now?’
At one level, I can say not a lot. The basic challenges are still there.
Now, however, many OEMs can no longer afford to defer a move to multicore. Silicon road maps and their own software requirements are beginning to force their hand.
So the big remaining question is this: now that OEMs are finally moving to multicore en masse, are the tools, software, and techniques widely available and mature enough to help them succeed?
Join us at the Freescale Technology Forum next week for a Monday afternoon panel discussion on multicore development challenges and potential solutions, moderated by VDC and featuring:
- Michael Christofferson, Director, Office of the CTO, ENEA
- Robert Oshana, Director, Software R&D, Freescale
- Mark Mitchell, Director, Open Source Software, Mentor Graphics
- Bob Monkman, Director, Business Development, QNX
- Michel Genard, VP, Tools and Lifecycle Solutions, Wind River
- Marcus Levy, President, Multicore Association