Supplier interviews for VDC’s 2013 Embedded Hardware Service
for Embedded Products are currently underway. As a result of a recent SGET (Standardization
Group for Embedded Technologies) announcement, we will now be including SMARC
as a separate form factor in VDC’s embedded COMs report. SMARC, formerly known
as ULP-COM, is a Kontron-proposed SGET standard for ultra-low-power COMs. In 2013, Kontron announced the release of three new SMARC products, each using an ARM-based processor from Freescale, TI, or NVIDIA. Somewhat similar in appearance to the DIMM-PC COM form factor, which originated with JUMPtec (acquired by Kontron in 2002), SMARC modules are edge-connected rather than pin-connected like many other COM form factors.
We expect that low-power computing modules such as SMARC, which take advantage of new low-power SoC products, will find traction in many embedded markets, particularly in M2M applications. OEMs should be very interested in products that can be added to their existing platforms to provide M2M functionality. In cases where an OEM's products were not future-proofed with
respect to available space or power supply capacity, being able to add new
computing modules that support M2M without costly retrofits can be a huge
advantage. In cases where M2M is being designed into a new system, these ultra
low power computing modules can add the necessary functionality without having
a huge impact on Bill of Material (BoM) costs.
We believe that VDC’s coverage of SMARC and similar embedded
devices is of critical importance, both to suppliers of those products and to their customers. To put it simply, nobody wants to “bet on the wrong horse.” For an embedded product standard to be successful, it must be supported by several suppliers and purchased by a solid, wide customer base.
Given any uncertainty, customers and suppliers are more likely to commit their
money to proven products and standards, no matter how compelling the new developments
might seem from a technology standpoint.
In 2013, VDC will work with both suppliers and their
customers to determine which new products and standards are gaining traction
and which, if any, product types or standards are losing share. It should be a
very interesting year.
This week, Sony announced details of the successor to the PlayStation 3 (PS3) game system. The new product will be called the PlayStation 4 (PS4), which makes sense: once you build a brand, you do not want to create confusion or disruption within your customer base. From a technical perspective, however, brand continuity is not so easily maintained when making significant architecture changes between platform models. This is precisely the issue that Sony could have with the PS4, because of its changes in embedded processing.
The PS4 will use a 64-bit, eight-core x86 AMD Jaguar processor, as opposed to the Cell architecture used in the PS3. In addition, a next-generation AMD Radeon GPU will provide 1.84 teraflops of graphics processing. This embedded processor shift is attractive for game providers because there is likely to be more x86 programming expertise available than was the case with Cell, and the relative familiarity of the processing and graphics capabilities should make more projects feasible.
If there is a negative note, it is that the change in embedded processing architecture means the PS4 will not be directly backward compatible with PS3 games. In other words, the PS4 will not be able to locally run PS3 game discs. Left unaddressed, this lack of compatibility could add a level of complexity to existing PS3 owners' purchase decisions, including:
Do I have enough physical space and unused TV connection ports
for both a PS3 and a PS4?
If so, will my all-in-one remote be able to
operate both of them without a problem?
If all of my PS3 games will be obsolete, should I wait to see what the new version of the Microsoft Xbox 360 is like before I migrate to next-generation gaming?
At what point will there be enough new PS4 games for me to consider abandoning all my favorite PS3 games?
It is here that Sony's July 2012 acquisition of cloud-based virtual gaming supplier Gaikai makes sense, because Sony can leverage it to mitigate the backward compatibility issue between the PS3 and PS4. The user places a PS3 game disc in the PS4. The PS4 identifies the disc as legitimate and acts as an interface between the game player's activities, the local graphics display, and the cloud-based processing resources. This cloud-based architecture, if it performs well, should mitigate the PS3-to-PS4 migration problem, but we believe questions still remain. For example, how the business model for those cloud resources will generate revenue, and who will actually provide and pay for them, are interesting questions.
Lastly, it appears from the PS4 hardware description that the AMD CPU and GPU selected by Sony will be purchased as separate components. There are significant product design and performance advantages to combining these functions on a single semiconductor die or package; in fact, this was a key product strategy when AMD acquired graphics expert ATI in 2006. For Sony, however, having a separate GPU may allow a more efficient architecture for the cloud-based PS3 compatibility and other services.
This is not to say that embedded computing products are not
already found in the typical home. To be quite clear, embedded microcontrollers are
used in almost every new appliance that has any type of display, or has
features beyond the lowest cost bare-bones models. Embedded computing modules
and integrated systems, however, are generally not found in the home, as they
are much more expensive than functionally-comparable consumer products.
Furthermore, embedded computing products are usually designed with ruggedized,
but aesthetically plain, enclosures. Lastly, embedded computers usually have
the minimum hardware required for a given application and offer few, if any, extra
bells and whistles like CD-ROM or Blu-ray burners. For these reasons, one might
assume that there was not much chance of embedded computing platforms gaining
traction in the consumer market. That is, until now.
While visiting AMD's booth at last year's Design West Embedded Systems Conference, we noticed that a company called Xi3 was showing a modular computer utilizing AMD processors, called the “5 Series.” Xi3 was demonstrating how these small, but reasonably powerful, modules could be deployed in an array for supercomputing applications, as a “data center on wheels.” Although our impression at the time was that these Xi3 units might not be rugged enough for some military applications, the compact case size and attractive form factor made some of us want to adopt one. As it turns out, we were not alone.
There is buzz from the recent Consumer Electronics Show (CES) that gaming company Valve is taking a financial interest in Xi3 and is considering their modular computers for home use with its products. The Xi3 unit called “Piston” has more processing power and is more graphically capable than the 5 Series Xi3 products we saw in early 2012. With a base model starting at ~$500 and a 240GB SSD version at ~$900, these Xi3 units are priced much higher than similar-capacity Xbox 360 or PlayStation 3 gaming products. On the other hand, people used to pay two to three times these prices for the desktop cube computer that Apple rolled out in 2000. These Xi3 products, originally developed for the embedded market, are likely to be a lot more reliable, while still having a sexy design that high-end consumers will value.
With many server and PC suppliers looking to expand away from traditional enterprise IT, consumer, and SOHO markets by targeting embedded applications, Xi3 shows us that the tables can also be turned. It is certainly possible that additional embedded computer suppliers will take some of their powerful and compact platforms and upscale them for the luxury consumer market. This trend could get very interesting.
We just saw a review of Huawei's new Ascend Mate smartphone, which features a 6.1” touchscreen, and it was far from positive. In summary, CNN Money's Adrian Covert found the Huawei product's market placement to be in the less-than-ideal “phablet” zone between phone and tablet. We agree with Adrian on one point: it is probably not an ideal size for a phone. At the same time, however, we believe this class of mobile product could experience the same type of success as 3M's well-known Post-it product.
Here's a quick summary in case you do not know the 3M Post-it story. A chemist at 3M was trying to create a super-strong adhesive, but the formula failed for that application. Only much later did the permanently tacky but not-so-strong adhesive find a consumer and business market where it excelled. This is not to say that the Huawei product is, pardon the pun, tacky. In our opinion, Huawei's 6.1” product would be an excellent “Bring Your Own Device” M2M platform. It strikes just the right balance: small enough to transport easily, yet with a display large enough to function as a Human Machine Interface (HMI). Furthermore, the larger form factor allows for a bigger battery and longer time between charges. Here is how that might work in a few M2M applications:
Industrial: Many industrial machines have to be adjusted for operator ergonomics and preferences. At the same time, due to multiple work shifts and operational flexibility, machines do not always have the same operators. The operator arrives at the machine and places the mobile device in the docking cradle. The device provides a customized HMI, and the operator's preferred machine settings are transferred to the machine. The operator logs on, and that act, coupled with possession of the registered device, serves as two-factor authentication, as sketched below. Many operational processes can be enabled and enhanced by this type of M2M method.
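To make the idea concrete, here is a minimal sketch of such a docking handshake in Python. Everything in it is hypothetical: the device IDs, the profile fields, and the function names are invented for illustration, not taken from any real product.

```python
# Hypothetical sketch of an M2M docking handshake. Possession of a registered
# device plus the operator's login act as two-factor authentication, after
# which the operator's stored preferences are pushed to the machine.

REGISTERED_DEVICES = {"dev-1138"}          # device IDs provisioned by the plant

OPERATOR_PROFILES = {                      # invented preference fields
    "alice": {"bench_height_cm": 95, "feed_rate_pct": 80, "hmi_language": "en"},
}

def authenticate(device_id, operator, password_ok):
    """Two factors: a registered docked device + valid operator credentials."""
    return (device_id in REGISTERED_DEVICES
            and password_ok
            and operator in OPERATOR_PROFILES)

def on_dock(device_id, operator, password_ok):
    """Called when the mobile device is placed in the machine's cradle."""
    if not authenticate(device_id, operator, password_ok):
        return None                        # machine stays locked
    settings = OPERATOR_PROFILES[operator]
    print(f"Applying {operator}'s settings: {settings}")
    return settings

on_dock("dev-1138", "alice", password_ok=True)
```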
Transportation/Automotive: In the transportation market for M2M, infotainment and telematics are two classes of applications that would be a good fit for the Ascend Mate's type of function and form factor. Docked on the driver's panel, it could transfer driver preferences and optimize vehicle settings. Unsafe activities like texting while driving or game playing would be locked out; that lock-out feature would also make insurance companies very happy. This brings us to the telematics applications, where insurance will play a big part in M2M adoption. Drivers can get insurance breaks if they continually exhibit safe driving habits, and since products like the Ascend Mate are intended to be phones, they contain the cellular connectivity necessary for verifying safe driving (a hypothetical scoring sketch follows below). Since these devices would be docked to the vehicle instead of embedded in the console, they could move with the driver from vehicle to vehicle. That would work particularly well for drivers who frequently use rental or shared vehicles. By providing driver and passenger mobile device docks, as opposed to full infotainment displays/systems, auto manufacturers could save themselves and their customers money. The passenger docks would, of course, allow full texting and gaming functionality.
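As a purely hypothetical illustration of the insurance angle, a docked device could report trip events that feed a safe-driving score. The event types, weights, and thresholds below are invented; a real insurer's model would certainly differ.

```python
# Hypothetical safe-driving score computed from trip events reported by a
# docked device. Event names, weights, and thresholds are invented.

def trip_score(hard_brakes, speeding_minutes, night_miles, total_miles):
    """Return a 0-100 score; higher is safer."""
    if total_miles == 0:
        return 100.0
    penalty = (5.0 * hard_brakes
               + 2.0 * speeding_minutes
               + 10.0 * (night_miles / total_miles))
    return max(0.0, 100.0 - penalty)

# One week of trips: (hard_brakes, speeding_minutes, night_miles, total_miles)
trips = [(1, 0.0, 0.0, 22.0), (0, 3.5, 5.0, 40.0), (2, 1.0, 0.0, 12.0)]
scores = [trip_score(*trip) for trip in trips]
print(f"weekly average: {sum(scores) / len(scores):.1f}")  # basis for a discount
```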
A few final thoughts:
Like many M2M solutions, universal standards will have to be set, or these types of HMI applications and products will never be transferable across and within markets.
Huawei reports that the Ascend Mate touchscreen works well when users are wearing gloves. This is a good attribute to have in many M2M markets.
As stated in the latest VDC Views report on M2M, in many applications, such as those found in industrial settings, it is generally preferable to use embedded components designed for those markets as opposed to those targeted for consumer products.
A few days ago, I posed the above question in a blog on these pages – and answered it, at least to a degree, by talking about single-atom transistors. Although one (count it – one) has actually been made, the technology is a long way from being ubiquitous.
However, like global warming and climate change, the single-atom “wall” is real. And we are rapidly approaching it. Use of GPUs for general-purpose computing is a hedge against the wall; these have far more transistors than conventional CPUs and facilitate parallel computing. Intel, NVIDIA and AMD are all pursuing this approach to supercomputing. But this isn’t a long-term solution; GPUs are faced with the same wall.
Intel is pushing toward the Moore's law limit through cooperative efforts with several outside firms. Intel has invested a staggering US$4.1 billion in ASML, a Dutch semiconductor equipment manufacturer. The investment will ultimately yield Intel a 15% share of ASML, and provides US$3.3 billion for R&D to make “extreme ultraviolet lithography,” or EUVL (using super-short wavelengths of UV light for the etching process), practical, and to develop 450-mm wafers (as opposed to today's 300-mm wafers). The former will enable 10-nm processes, while the latter will reduce manufacturing costs. And Intel isn't the only one; Samsung has followed suit with an investment in ASML, and Taiwan Semiconductor Manufacturing Company, Ltd. (TSMC) has also made a significant investment. TSMC purports to be the world's largest independent semiconductor foundry, and, although they are currently building three 300-mm wafer fabs, their current production is limited to 200-mm wafers.
Increasing transistor density by shrinking their size is only one way of battling the approaching wall. TSMC and one of its rivals, GlobalFoundries (GloFo), as well as Intel and the rest of the usual suspects, are actively pursuing 3-D chip technology. 3-D chips have been made; Intel’s Ivy Bridge architecture utilizes 3-D technology. 3-D transistors, called FinFETs, promise to both increase speed and reduce power consumption.
3-D ICs
3-D integrated circuits, which will allow far greater transistor density in a given planar footprint, are on their way. However, fabricating these is not a trivial matter. Early versions comprised stacking dice atop one another with an insulating layer between them, and interconnecting the dice using a rather laborious process. This was called “Chip Stack MCM,” and it didn't produce a “real” 3-D chip. But by 2008, 3-D IC technology had progressed to the point that four types had been defined, as follows:
(1) Monolithic, wherein components and their interconnections are built in layers on a single wafer, which is then diced into 3-D chips. This technology has been the subject of a DARPA grant, with research conducted at Stanford University.
(2) Wafer-on-Wafer, wherein components are built on separate wafers, which are then aligned, bonded and diced into 3-D ICs. Vertical connections comprise “through-silicon vias” (TSVs) which may either be built into the wafers before bonding or created in the stack after bonding. This process is fraught with technical difficulties, not the least of which is relatively low yield.
(3) Die-on-Wafer, where components are built on two wafers. One is then diced, with the individual dice aligned and bonded onto sites on the second wafer. TSV creation may be done either before or after bonding. Additional layers may be added before the final dicing.
(4) Die-on-Die, where components are built on multiple dice which are then aligned and bonded. TSVs may be created either before or after bonding.
There are obvious technical difficulties and pitfalls no matter which approach is used. These include yield (a single defective die may make an entire stack useless); thermal concerns (caused by the density of components); the difficulty of automating manufacture; and a lack of standards.
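To put rough numbers on the yield concern: if dice are stacked without pre-bond testing and defects are independent, the whole stack works only when every die works, so stack yield falls off exponentially with stack height. The 90% per-die yield below is an assumed figure for illustration only.

```python
# Back-of-the-envelope 3-D stack yield, assuming untested ("not known good")
# dice and independent defects: stack_yield = die_yield ** layers.

die_yield = 0.90  # assumed per-die yield; real values vary widely by process

for layers in (1, 2, 4, 8):
    print(f"{layers}-die stack yield: {die_yield ** layers:.1%}")
# 1-die stack yield: 90.0%
# 2-die stack yield: 81.0%
# 4-die stack yield: 65.6%
# 8-die stack yield: 43.0%
```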
In my layman's opinion, a new approach to 3-D technology may be needed before it becomes truly viable. Currently, components are built on wafers through the selective removal of material. Construction of 3-D chips could be simplified through selective deposition of material rather than its removal. However, that's beyond today's state of the art.
As we look at biological equivalents, though, it’s very clear that brains are 3-D structures. I doubt that true artificial intelligence can be realized in a relatively small package without the development of true 3-D chips. Moore’s law will ultimately stymie continued development of planar chip technology.
Stay tuned for part 3 – there’s a really interesting development out there!
In the last two days, I have written about two situational
awareness applications for embedded hardware products. The first was toward the
creepy side and the second much more acceptable on multiple fronts. I think you
will find this somewhat in the middle.
As New Year's Eve approaches, you might find that this application provides actionable intelligence you can really use.
In Wednesday's Boston Globe, I read a story about a company called SceneTap, which provides a downloadable application that can give you current population and demographic information for local bars or hangouts you might be interested in. How does SceneTap provide this product? The answer is quite interesting from both a technical and a business perspective. Let's look at both sides:
Business Perspective: SceneTap provides the downloadable application to users for free, and it is activated when users open an account using a valid e-mail address. Potential patrons can use the application to see whether a place is “hopping” or not. SceneTap earns revenue by selling the hardware and/or service to establishments in a covered urban area. The value proposition to one of these establishments is multi-faceted. First of all, if you have a place that is relatively popular at least some of the time, the application should increase business because more people will know that it is a good place to go. If they have a good time, they are likely to come back. The application can also provide management with actionable intelligence about the exact demographics of patrons during all hours, days, and/or events. Previously, this would have been done with less accurate, manpower-intensive qualitative data. SceneTap could also provide information on application use inside the establishment or, at a minimum, how many times users looked up the establishment. This data could be used to send e-mail offers to targeted users and could also be a source of advertising revenue for SceneTap.
Hardware Perspective: The SceneTap embedded hardware consists of people counters at the entry/exit doors and cameras that provide facial images, which are analyzed by local and/or cloud-based computers. We believe that SceneTap could use something like Intel's AIM Suite as part of the solution stack. SceneTap stresses that its application performs only facial analysis, as opposed to facial recognition, which is a very important distinction for this product's placement on the creepiness scale. So, if you are using the SceneTap application to determine whether a specific person is at a particular place, you will have to use Twitter, Facebook, or something else, like a phone call, for that actionable intelligence.
Other embedded hardware that could potentially be deployed as part of the solution stack includes microphones and DSP components to detect sound levels, and Wi-Fi equipment that could detect the number (but hopefully not the identity) of SceneTap users inside the particular establishment.
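Here is a simplified sketch of what the people-counting portion of such a solution stack might look like, assuming the door sensors emit simple entry/exit events. The event format is invented, since SceneTap's actual hardware interface has not been published.

```python
# Simplified occupancy tracker fed by hypothetical entry/exit door-counter
# events. Demographic estimates would come from the facial-analysis cameras.
from collections import Counter

def current_stats(events):
    """events: iterable of (direction, age_bracket, gender) tuples,
    where direction is 'in' or 'out'."""
    occupancy = 0
    demographics = Counter()
    for direction, age_bracket, gender in events:
        if direction == "in":
            occupancy += 1
            demographics[(age_bracket, gender)] += 1
        elif direction == "out" and occupancy > 0:
            occupancy -= 1
            if demographics[(age_bracket, gender)] > 0:
                demographics[(age_bracket, gender)] -= 1
    return occupancy, demographics

events = [("in", "21-30", "F"), ("in", "21-30", "M"),
          ("in", "31-40", "M"), ("out", "21-30", "M")]
count, demo = current_stats(events)
print(f"currently inside: {count}, breakdown: {dict(demo)}")
```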
Final Thoughts: So, as you look for a place to go on New Year's Eve, you might think of using SceneTap if they cover your area. I am lucky because they do have coverage in Boston, but I would probably use the application in a way that SceneTap's founders probably did not originally envision. Since I am no longer young, hip, or single, I would tend to use SceneTap to find a quiet place that is NOT hopping. Then I won't have to stand in line to get in or wait very long to get served, and, most importantly, I will be able to hear my wife's conversation.
Have a great New Year’s everyone and best wishes for a prosperous
and fiscal cliff-less 2013.
In yesterday's blog we looked at some pretty creepy applications for situational awareness technology. Now, let's look at how these systems can be employed in a much more socially acceptable manner. In the wake of the Sandy Hook School and Aurora Theatre tragedies, President Obama has made a governmental call to action, with a task force being formed to examine every possible solution. Gun control will be considered, as will the NRA's plan to use more armed guards. Neither extreme is likely to be a good standalone solution.
Embedded computing could be part of a more optimal solution. Situational awareness technology similar to the signage and Verizon patent examples could be used as part of surveillance and security systems. In large urban areas, systems that detect and precisely locate gunshots are already being successfully deployed. When these Gunshot Location (GSL) systems are coupled with remote-controlled high-definition cameras, it becomes much more likely that a perpetrator can be swiftly apprehended.
In an indoor setting like a school or movie theatre, a GSL system would face challenges that it would not have outdoors. The acoustics of walls and hallways will require sophisticated Digital Signal Processing (DSP) capabilities to account for the echoes and compute a precise location. In the case of a movie theatre, the movie soundtrack would also have to be fed back into the GSL system so that gunshots in the movie could be ignored.
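To make the signal-processing challenge concrete, here is a bare-bones sketch of the time-difference-of-arrival (TDOA) math at the heart of acoustic location. The microphone layout is assumed, and the toy grid search has no echo handling at all; the indoor reverberation discussed above is precisely what this version omits and what real GSL DSP must solve.

```python
# Toy time-difference-of-arrival (TDOA) locator: grid-search for the position
# whose predicted arrival-time differences best match the measured ones.
# Microphone positions are assumed; real GSL systems add echo cancellation.
import math

SPEED_OF_SOUND = 343.0  # m/s at room temperature

MICS = [(0.0, 0.0), (30.0, 0.0), (0.0, 20.0), (30.0, 20.0)]  # assumed layout (m)

def arrival_times(source):
    """Time of flight from the source to each microphone."""
    return [math.dist(source, mic) / SPEED_OF_SOUND for mic in MICS]

def locate(measured, step=0.5):
    """Search the 30 m x 20 m floor plan for the best-fit source position."""
    best, best_err = None, float("inf")
    for xi in range(61):
        for yi in range(41):
            x, y = xi * step, yi * step
            predicted = arrival_times((x, y))
            # Differences relative to mic 0 cancel the unknown shot time.
            err = sum(((predicted[i] - predicted[0]) -
                       (measured[i] - measured[0])) ** 2
                      for i in range(1, len(MICS)))
            if err < best_err:
                best, best_err = (x, y), err
    return best

true_source = (12.0, 7.5)
print("estimated position:", locate(arrival_times(true_source)))  # ~(12.0, 7.5)
```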
So, would this situational awareness and GSL technology be the complete solution? Unfortunately, the answer is no. What it would be is a mitigating factor, allowing in some cases a much more timely response from security and/or law enforcement. The emergency responders would immediately know the identity and location of the threat. It is also possible that an incident could be prevented if the situational awareness system presented an actionable alert to security before the shooting starts. Angry shouts or rapid changes in a crowd's mood and emotions could be possible triggers. Camera systems that identify possible guns being carried could also play a part, but they would need humans as a backup. Even so, these surveillance systems would provide the constant attention that no human could consistently apply over long periods.
Regardless of the exact details of President Obama's task force findings and the US government legislation and response that follow, a few elements are certain. Money will be appropriated, and visual and acoustic data from widespread camera and microphone installations will need to be tightly integrated to provide actionable data. Therefore, there will be clear opportunities in 2013 and beyond for embedded component, system, and software suppliers. Suppliers already participating in the Digital Signage or Digital Security and Surveillance markets will have an advantage, but new or innovative technology can easily disrupt the incumbents, so complacency is not an option.
This blog, related to a Verizon patent application, is a follow-up to a previous VDC embedded hardware blog that talked about embedded computing capability being added to signage. The Verizon patent application covers similar applications and technology for set-top boxes.
If the Verizon technology described in the patent is deployed, the same type of technology mentioned in the previous blog on signage might one day apply in your living room. The Verizon technology would monitor the TV viewing area using microphones, cameras, and/or sensors. These sensors could be located in the set-top box, TV, and/or a mobile device. Verizon's overall goal would be to gain situational awareness of the TV viewers to allow targeted advertising.
Deploying this type of situational awareness technology will have to be done very carefully to avoid offending customers. The deployment will also need to be extremely secure to avoid any risk that the system could be hacked and expose customers to remote eavesdroppers/peepers. The risk that law enforcement would want to leverage such a system for court-approved wiretaps also cannot be discounted. To be clear, Verizon has only applied for the patent; there is no indication that this is close to an actual product at this point.
Verizon certainly would not be the first company with intrusive home technology. If you have ever played with the Xbox 360 Kinect, you have seen that it snaps pictures of game participants. In addition to showing them on-screen as entertainment after the game action finishes, the Xbox 360 transmits some of these images back to Microsoft's Azure cloud platform, where the data is stripped of identifying elements and is often used by game developers as part of a feedback process.
Embedded processors and situational intelligence will have an increasing presence in our home lives, and some of this is certainly likely to be a bit creepy. In tomorrow's blog, we will examine how this technology can be deployed for excellent, non-controversial causes, such as reducing the number and severity of tragedies like Sandy Hook and the Aurora Theatre.
In case you've been living under a rock and don't know this: in 1965, Intel co-founder Gordon E. Moore predicted that the number of transistors on an integrated circuit would double approximately every two years. Empirically based on economic factors as well as technical ones, his observation and conclusion have been so accurate that they have been given the title “Moore's Law.” Certain pundits continually predict that we are reaching the end of the trail, and that the trend cannot continue because miniaturization technology will reach its limit. (It should be noted that Moore's Law doesn't specify the size of the IC die; logically, one should be able to fit more transistors on a larger die – but that's another story.)
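As a quick, back-of-the-envelope illustration of what that doubling rate implies (a toy projection with an assumed starting count, not a forecast for any specific product):

```python
# Toy Moore's-law projection: transistor count doubles every two years.
def projected_transistors(count_now, years, doubling_period=2.0):
    return count_now * 2 ** (years / doubling_period)

base = 1.4e9  # assumed count, roughly a high-end 2012-era quad-core CPU
for years in (2, 4, 6, 10):
    count = projected_transistors(base, years)
    print(f"+{years} years: {count / 1e9:.1f} billion transistors")
# +2 years: 2.8 billion transistors
# +4 years: 5.6 billion transistors
# +6 years: 11.2 billion transistors
# +10 years: 44.8 billion transistors
```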
The Ivy Bridge architecture, which utilizes a 22-nanometer fabrication process, comprises Intel's product offerings for 2012. The firm's next-generation micro-architecture, code-named Haswell, is expected to arrive in 2013 and will continue to use the 22-nm process. In 2014, the process will shrink to 14 nm with Broadwell. Down the road, the process is expected to shrink even more, getting down to 10 nm by 2018.
How long can this go on?
Well, there is certainly at least one real limit, which is the size of the transistors themselves. A combined team of researchers from the University of New South Wales, the University of Melbourne and Purdue University has recently created a functional transistor comprising a single phosphorus atom. Furthermore, they have also developed a wire made from a combination of phosphorus and silicon, one atom tall and four atoms wide, that behaves like a copper wire. Granted, this technology is far from practicable at this point in that it has to be maintained at a temperature of minus 391 degrees F, but it does show what is possible.
As circuits get smaller and smaller, other laws of physics come into play, causing additional technical problems. Dr. Michio Kaku (surely you've seen him on TV - if not, you should!) of CCNY says that once transistors shrink to 5 atoms wide (projected for 2020), the Heisenberg Uncertainty Principle will come into play. This states that it is not possible to know both the position and the momentum of a particle with arbitrary precision; the more precisely one is known, the less precisely the other can be. Thus one cannot know precisely where an electron actually is, and therefore cannot confine it to a wire. Since free electrons can't be allowed to go bouncing about in any logic circuit, because they may cause shorts (or, at least, logical errors), this may prove to be a practical limit.
Some pundits have theorized, though, that getting down to these sizes may allow the development of true quantum computing, wherein information is processed on a more-than-binary level. This remains to be seen.
There’s a lot of interesting stuff going on in this space. Some practical, some not so much. Stay tuned, as I plan to do a couple of additional blogs on this subject before I retire sometime next year.
Smarter Planet. Intelligent Systems Framework. Surround
Computing. These are the phrases that IBM, Intel, and AMD have developed for
their visions of the future of connected devices – the so-called Internet of Things.
While all these visions may have subtle and competing qualities, there is no
doubt that the big picture implications are largely the same: more devices,
more connectivity, more intelligence. All these concepts are contributing to a
future in which embedded technology takes a greater role in our lives. What do
these competing visions have in common and how do they differ? To better
understand this question, we summarize below our interpretations of these
marketing frameworks.
Intel – Intelligent Systems Framework (ISF)
Intel’s ISF establishes that the era of discrete embedded
devices is over and the era of intelligent systems has begun. The lines between
consumer, enterprise, and machine data are now blurring. This results in a
number of challenges including device connectivity, security, and management.
These three concepts are what the ISF seeks to address, and this framework
incorporates elements from a number of groups within Intel including McAfee and
Wind River. The ultimate goal of the ISF vision is to provide a scalable
architecture that extends across multiple platforms and systems seamlessly
while reducing cost and fragmentation. The ISF sets the stage for the next
level of accessing value, which is Big Data.
AMD – Surround Computing
AMD’s vision for the future of embedded computing centers more
on the user interface and responding to the individual user’s needs. AMD
envisions a future that is less based around the traditional interfaces such as
keyboards and more based around sensors that trigger automated requests. The
end user will interact more through touch displays and voice commands than
through the traditional computer interfaces. It also assumes more automated
transactions that free the user from the time and hassle of basic payment
transactions. Technology for both facial and voice recognition will obviously be critical in this future vision and will likely require intensive graphics capabilities, which AMD's GPUs or APUs are well suited to support. In general, it's
a more end-user focused vision than ISF, and one that plays to AMD’s strengths
in graphics and integration.
IBM – Smarter Planet
IBM’s concept of a Smarter Planet has been around for some
time, and is perhaps the most wide-ranging of all. It focuses more on the
disruptive nature of cutting edge technology and how it is turning industries
on their head and unlocking value in unexpected areas. Several themes are
encompassed in Smarter Planet, including Big Data, integrated solutions, technology awareness both at the individual level and within the enterprise, the omnipresence of the cloud, mobility, security, and intelligent devices. It's a multi-faceted
approach that looks at both vertical markets and global trends, and one that is
fitting for a global services provider such as IBM.
Whether you look at the world through the lenses
of ISF, or Surround Computing, or Smarter Planet, there are a number of common
themes. One is connectivity. More devices are connected every day and more data
is passed between devices than ever before. Another is interactivity. Whether it
is managing devices remotely, directing them with voice commands or giving
companies a better understanding of their customers’ needs, interactivity is on
the rise everywhere. Finally, there is Big Data: the idea that all the information these connected devices are trading can help us better understand the world around us.