48 posts categorized "Security"

11/14/2014

Automotive Privacy Protection Principles Don't Go Far Enough

The Association of Global Automakers and the Alliance of Automobile Manufacturers jointly announced on November 13, 2014 a set of voluntary “Consumer Privacy Protection Principles.” (See the press release here, and download the principles PDF document here.)

The document is written in quasi-legalese, but in essence, it’s a pledge by automakers, beginning with the 2017 model year, to, among other things:

  • inform consumers about how data collected from their vehicles will be used
  • obtain “affirmative consent” for certain ways that data might be used
  • anonymize aspects of the data under some circumstances

VDC applauds the auto industry for recognizing the importance to consumers of privacy for data collected by electronic and digital technologies, which are growing by leaps and bounds in new vehicles. However, the principles don't go far enough in several respects:

Security – The document states that participating members must “implement reasonable measures to protect Covered Information against loss and unauthorized access or use,” then says that “reasonable measures include standard industry practices.” The word reasonable is too wishy-washy in this context, so those statements in the privacy principles don’t inspire confidence that automakers and their partners will go the extra mile for data security. (Why don't the principles say the members must "implement strong measures" to protect the data?) Because the document defines no minimum security measures and makes no commitment to create or adhere to an ISO standard, it comes across as a nice way of saying, “We’ll make a good effort at security, but don't expect us to guarantee the data won't get breached.” In addition, security issues apply to data within vehicles' internal systems, to data in transit from vehicles to infrastructure, and to the databases where the manufacturers will aggregate and store the data. Security policies should specify minimum requirements for how data will be secured at each of these levels, as well as how authorized third parties with data access will be required to secure it.

Consent – The document states that automakers must obtain consent through “a clear, meaningful, and prominent notice disclosing the collection, use, and sharing of Covered Information.” However, the document includes no provision for a vehicle owner to deny such consent or revoke it afterwards. Why would that be important? Because the consent form is likely to be presented to consumers in a stack of papers that they sign in a perfunctory manner when buying a car. In addition, consent ideally would give vehicle owners the ability to agree or not agree to each type of data collection, rather than a blanket statement of consent to collection of all data. We’ll see how this plays out when the first consent forms hit the market.

Data Access – The document says that consumers will have “reasonable means to review and correct Personal Subscriber Information.” Such information may include name, address, telephone number, email address, and even credit card number. It’s fine that automakers will give consumers the right to access the data that they themselves provided in the first place, but what the document misses entirely is the basic principle that consumers should have the right to access data produced by their own vehicles. Although this isn't a data privacy issue, it is a data rights issue that automakers need to address. In VDC’s opinion, vehicle owners should have, for example, the ability to take diagnostic data to an independent mechanic, rather than manufacturers only providing such data to their dealers or to third parties that have paid to access it. And vehicle owners should have the ability to access geolocation data generated by their own vehicles. Certain types of data may need to be kept confidential, but the default should be to provide consumers access to data from their own vehicles unless there’s a legitimate safety reason not to make it available to the people whose vehicles generated it.

For further discussion of data rights issues related to the automotive industry and the Internet of Things, see the recent VDC View article entitled, Beyond "Who Owns the Data?" 

10/02/2014

Notable Demos from ARM TechCon 2014 and JavaOne

Semiconductor intellectual property supplier ARM kicked off its annual TechCon conference and trade show in Santa Clara, CA with expansion of the mbed IoT Device Platform, including a free operating system as well as server-side IoT technologies. We describe the mbed OS in more detail in a separate blog post. In this post, we’ll highlight a couple of notable demos from other vendors on the show floor, plus one from Oracle’s concurrent JavaOne conference and trade show in San Francisco. In a literal sign of the times at both events, you couldn’t swing a dead cat without hitting a sign that read, “Internet of Things.”

At ARM TechCon the Cryptography Research division of Rambus showed an interesting demo of differential power analysis (DPA). This is a type of side channel attack based on sensing power consumption and/or emission patterns of a processor during cryptographic operations, with the objective of extracting encryption keys. Prior to seeing this demo, we had thought of DPA as an academic or theoretical exercise that either wouldn't work in the real world or would take so long as to be insignificant. But Rambus showed us exactly how it works, measuring power emissions from a Xilinx chip while it was repeatedly performing AES 128-bit symmetric encryption on short blocks of data, and running statistical analysis on the power readings to uncover the encryption key one byte at a time. The entire key was recovered in about two minutes. Because the process is linear with the number of bits, AES 256-bit would only have taken about four minutes. (Breaking 256-bit encryption by brute force is orders of magnitude more difficult than breaking 128-bit.) In addition, the company demonstrated a simple power analysis (SPA) side channel attack by holding a receiver antenna against the back side of an Amazon Kindle tablet (see Picture 1), and directly reading a signal containing the asymmetric key from RSA 2048-bit encryption running in software on the device. No statistical analysis was required, as a viewer could see a graphical form of the signal representing the zeros and ones (Picture 2).
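For the curious, the statistical step described above can be illustrated in miniature. The sketch below is a hypothetical simulation, not anything from the Rambus demo: each "power sample" is modeled as the Hamming weight of an S-box output plus noise, and all 256 guesses for one key byte are ranked by how well their predictions correlate with the measurements (a correlation-style variant of DPA). The toy S-box is a stand-in for the real AES table.

```python
import math
import random

# Toy 8-bit S-box: a fixed pseudo-random permutation standing in for
# the real AES S-box table.
rng = random.Random(0)
SBOX = list(range(256))
rng.shuffle(SBOX)

def hamming_weight(x):
    return bin(x).count("1")

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def recover_key_byte(plaintexts, traces):
    # For each of 256 key guesses, predict the S-box output's Hamming
    # weight and correlate it with the measured power samples; the
    # correct guess produces by far the strongest correlation.
    return max(range(256), key=lambda g: abs(pearson(
        [hamming_weight(SBOX[pt ^ g]) for pt in plaintexts], traces)))

# Simulate traces: power sample = HW of S-box output + Gaussian noise.
KEY_BYTE = 0x3C
meas = random.Random(1)
plaintexts = [meas.randrange(256) for _ in range(2000)]
traces = [hamming_weight(SBOX[pt ^ KEY_BYTE]) + meas.gauss(0, 1.0)
          for pt in plaintexts]
print(hex(recover_key_byte(plaintexts, traces)))
```

With a few thousand traces, the noise averages out and one key byte falls; repeating per byte is what makes the attack linear in key length, as noted above.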


Picture 1: Reading power emissions from a Kindle tablet running encryption software.


Picture 2: Kindle emissions isolated in the frequency band of the encryption key. Narrow distances between the gaps are zeros, and wider double distances between the gaps are ones. The section of the key shown reads as 0011000001.

Needless to say, we were impressed with how easily these attacks were executed, which of course was the entire point of the demos. Rambus offers side channel attack countermeasures in the form of hardware cores and software libraries, and the demos also showed how such countermeasures confound the measurements and analysis.

Green Hills Software teamed up with Freescale to demonstrate a retail POS simulation of a RAM scraper malware technique similar to the type implicated in the Target data breach, as well as a solution using the INTEGRITY hypervisor to negate the malware. The protection method keeps the credit card data encrypted in a secured application space before it gets to the normal execution area, then sends the data via a secure channel to the payment processor server. Only a one-time token is passed to the normal execution area of the POS system. When that token is submitted to the payment processor, the funds are approved without the credit card data ever existing in the normal execution area, rendering RAM scraping useless for theft of the card data. This type of tokenization has been done before in the payment card industry (PCI), and we expect the Target data breach will increase its uptake.
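The tokenization flow described above can be sketched in a few lines. This is a hypothetical illustration of the concept, not Green Hills' implementation; `SecureVault` stands in for the hypervisor-isolated application space, and the one-time token is the only thing the scrapeable side ever holds.

```python
import secrets

class SecureVault:
    """Stands in for the secured application space / payment processor."""
    def __init__(self):
        self._tokens = {}  # one-time token -> card data, secure side only

    def tokenize(self, card_number):
        token = secrets.token_hex(16)   # single-use, unguessable token
        self._tokens[token] = card_number
        return token                    # only the token leaves this side

    def settle(self, token, amount):
        # Redeem the token exactly once; the card number never leaves here.
        card = self._tokens.pop(token, None)
        if card is None:
            return "declined: unknown or reused token"
        return f"approved: {amount} charged"

vault = SecureVault()

# Normal execution area of the POS: a RAM scraper here would find only
# the token, which is worthless once redeemed.
token = vault.tokenize("4111111111111111")
print(vault.settle(token, "$25.00"))   # approved
print(vault.settle(token, "$25.00"))   # declined: token already spent
```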

And at the JavaOne event, Oracle and the French software firm Oberthur Technologies demonstrated an Android device with a Java Virtual Machine running within an ARM TrustZone trusted execution environment (TEE). Oberthur’s software runs on the server side of an Internet connection and enables specially designed Java apps to be securely installed on the device, likewise using a tokenization method. This is the only solution we’ve seen to date that enables applications to be remotely installed into a TEE. Although the demo was run on an Android phone, we see the potential for its use in many other types of IoT devices.

08/07/2014

IoT Lessons from the Russian CyberVor Hacking

Widely reported during the first week of August was the revelation that a group of Russian hackers known as CyberVor had amassed a database of 1.2 billion usernames and passwords, as well as more than 500 million email addresses. The New York Times originally broke the story, based on findings from the firm Hold Security. Unlike the Target retail data breach of late 2013 and the more recent eBay breach, CyberVor’s loot is not the result of one or two large breaches, but rather a large number of breaches of all sizes. Hold Security says that the data came from 420,000 websites, ranging from large household-name dotcoms down to small sites. Most of the sites were breached using SQL injection attacks, carried out via a botnet of malware-infected computers belonging to unwitting legitimate users.

Breaches of major websites or retailers tend to be highly concentrated, narrowly focused efforts, whereas the database collected by CyberVor appears to be the result of casting a very wide (bot)net, trawling the world wide web for anything the group could catch.

What lessons can the CyberVor revelation teach us (or reinforce) about the Internet of Things?

Lesson #1: No IoT site (either physical or virtual) is too small to be attacked. Many users are tempted to think, “Why would anyone bother to hack my little IoT network?” The answer is, “Because they can.”

Lesson #2: Even data that has little or no value to hackers on its own may have value when aggregated. If you think your data is worthless to others, you’re probably wrong. Big data is composed of a whole lot of little data.

Lesson #3: Authorized users or devices are not necessarily safe just because they are authorized. Follow the principle of least privilege, in which users or devices only have access to the minimum amount of data and system resources necessary to perform their functions.
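Lesson #3's principle of least privilege can be sketched as a deny-by-default permission table. The roles and permission names below are hypothetical, chosen only to illustrate the idea:

```python
# Least privilege: each role gets only the permissions its function
# requires, and any request not explicitly granted is denied.
ROLE_PERMISSIONS = {
    "temperature-sensor": {"write:telemetry"},
    "dashboard":          {"read:telemetry"},
    "admin-console":      {"read:telemetry", "write:config"},
}

def is_allowed(role, permission):
    # Deny by default: unknown roles and unlisted permissions both fail.
    return permission in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("temperature-sensor", "write:telemetry"))  # True
print(is_allowed("temperature-sensor", "read:telemetry"))   # False
print(is_allowed("unknown-device", "read:telemetry"))       # False
```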

Lesson #4: Monitor your networks for atypical or unexpected movements of data. This is challenging in practice, because valid usage occasionally may not follow past patterns. Nevertheless, at a minimum the system should have a way to throw up a red flag if a user or device is attempting to copy large portions of a database.
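A minimal version of Lesson #4's red flag might look like the sketch below. The 5% threshold and the per-user accounting are assumptions for illustration; a real system would also weigh access rates, time of day, and destination.

```python
from collections import defaultdict

TOTAL_ROWS = 1_000_000
ALERT_FRACTION = 0.05          # assumed threshold: 5% of the database

rows_read = defaultdict(int)
flagged = []

def record_read(user, rows):
    # Count rows read per user; alert the first time a user pulls an
    # outsized fraction of the database.
    rows_read[user] += rows
    if user not in flagged and rows_read[user] > TOTAL_ROWS * ALERT_FRACTION:
        flagged.append(user)   # in production: page the security team

record_read("analyst", 2_000)          # ordinary query volume
for _ in range(30):
    record_read("scraper", 2_000)      # sustained bulk copying
print(flagged)  # ['scraper']
```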

Lesson #5: Don’t neglect the basics. SQL injection attacks as well as buffer overflows and cross-site scripting are common and easily preventable. Most software code analysis tools can check for vulnerabilities to such attacks early in the development process.
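For Lesson #5, the fix for SQL injection in particular is a one-line change. The sketch below contrasts the vulnerable string-concatenation pattern with a parameterized query, using Python's built-in sqlite3 module and an in-memory database:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password_hash TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'abc123')")

attacker_input = "' OR '1'='1"

# Vulnerable: the input is spliced into the SQL text, so the attacker's
# quote characters rewrite the query and it matches every row.
unsafe = conn.execute(
    "SELECT * FROM users WHERE name = '" + attacker_input + "'").fetchall()

# Safe: the ? placeholder keeps the input as pure data; the same payload
# matches nothing, since no user is literally named "' OR '1'='1".
safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (attacker_input,)).fetchall()

print(len(unsafe), len(safe))  # 1 0
```

Static analysis tools flag the first pattern precisely because the distinction is this mechanical.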

Lesson #6: Conduct independent penetration tests on your devices and networks. If you think that your own engineers already have covered every possible attack vector, you’re probably wrong. You need outside eyeballs incentivized to find flaws without concern about stepping on coworkers’ toes.

And lastly, Lesson #7: At the risk of stating the obvious, encrypt your data. Any database that is accessible either directly or indirectly from the Internet is worth encrypting. Passwords in particular are keys to the kingdom. Protect them with salted hashes computed with strong, slow algorithms (hashing, not reversible encryption, is the right treatment for passwords). There is never a valid reason to store passwords in plain text.
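Salted, deliberately slow password hashing takes only a few lines with a standard library. A minimal sketch using PBKDF2 via Python's hashlib (bcrypt or scrypt are reasonable alternatives); the iteration count is an illustrative choice:

```python
import hashlib
import hmac
import os

ITERATIONS = 200_000  # slows brute-force attempts; tune for your hardware

def hash_password(password, salt=None):
    # A unique random salt per password defeats precomputed rainbow tables.
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest          # store both; the salt is not secret

def verify_password(password, salt, stored_digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, stored_digest)  # constant-time

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("password123", salt, digest))                   # False
```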

If the websites breached by CyberVor already had learned these lessons, the hack wouldn’t even have been newsworthy.

For more insights into IoT security issues, check out VDC’s research program on Security & the Internet of Things.

05/22/2014

eBay Response to Data Breach Shows the Company Still Doesn’t Get It

This month’s major data breach news comes courtesy of hackers who accessed eBay’s user database by using valid credentials pilfered from eBay employees. The hackers apparently had access to eBay’s entire database of 145 million active users during the months of February and March 2014. The information accessed included passwords in encrypted form, as well as names, email addresses, shipping addresses, and dates of birth all in plaintext.

eBay’s user database was apparently accessible to the hackers because they logged in using genuine eBay employee credentials. But why should that give the hackers unfettered access to the entire user database? Of course company employees may have valid reasons for accessing the user database, but eBay could have limited the access such that:

  • a separate password or two-factor authentication was required to gain entry to the database;
  • the database was only accessible from whitelisted terminals; and
  • excessive access by any individual employee threw up a red flag immediately (not months later).
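Those three controls compose naturally as layered checks. The sketch below is hypothetical (terminal names, the hourly limit, and the function names are invented for illustration):

```python
WHITELISTED_TERMINALS = {"hq-term-01", "hq-term-02"}
ACCESS_LIMIT_PER_HOUR = 100   # assumed ceiling for legitimate use

flags = []

def raise_red_flag(employee):
    flags.append(employee)    # in production: alert security immediately

def may_access_user_db(employee, terminal, second_factor_ok, accesses_this_hour):
    if terminal not in WHITELISTED_TERMINALS:
        return False          # wrong machine: deny outright
    if not second_factor_ok:
        return False          # stolen credentials alone are not enough
    if accesses_this_hour > ACCESS_LIMIT_PER_HOUR:
        raise_red_flag(employee)
        return False          # excessive use: deny and alert now, not later
    return True

print(may_access_user_db("mallory", "home-laptop", True, 5))   # False
print(may_access_user_db("eve", "hq-term-01", True, 5000))     # False, flagged
print(may_access_user_db("alice", "hq-term-01", True, 12))     # True
```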

eBay’s IT department has a chance to address those issues, but the company’s public relations department hasn’t done too well thus far.

eBay posted a notice on its website regarding the breach, entitled “Important Password Update,” the full text of which is below.

In VDC’s opinion, eBay’s public response to the breach has missed the mark.

eBay’s notice informed users that their encrypted passwords might have been compromised, and instructed them to change the passwords. Since the passwords were protected with a salted-hash technique, few if any actual passwords are likely to be cracked. Nevertheless, it doesn’t hurt to tell users to change passwords, particularly if a user shares the same password across multiple websites. However, the notice failed to mention the other personal information (non-encrypted) that was compromised. Such personal information presents a risk that hackers could attempt identity theft, which is arguably a greater concern than the compromise of one site’s password. In effect, eBay has warned users about the information that is probably still safe, and ignored the disclosure of information that is clearly unsafe. And by failing to mention the other personal data that was accessed, eBay is creating a false sense of security that users will be safe if they just change their passwords.

Password changes can help make eBay safer, but they don’t improve the security of users whose personal information has already been appropriated. Because disclosure of users’ personal information could lead to subsequent attempts at identity theft, eBay might need to offer up free credit monitoring service to its users, even though no credit card or other financial information was disclosed.

Users don’t necessarily care how safe and secure eBay is; they care how safe and secure their own personal information is. eBay’s response thus far indicates that the company doesn’t get the distinction.

 

Full text of eBay’s notice to users:

[Note several days after we posted this, eBay revised the text of its password update notice to include the fact that personal data beyond encrypted passwords had been compromised, although eBay still doesn't relate the implications of that to its members. The text below is eBay's original notice.]

Important Password Update
Keeping Our Buyers and Sellers Safe and Secure on eBay
On Wednesday, we announced that we are asking all eBay users to change their password. This is because of a cyberattack that compromised our eBay user database, which contained your encrypted password.
Because your password is encrypted (even we don’t know what it is), we believe your eBay account is secure. But we don’t want to take any chances. We take security on eBay very seriously, and we want to ensure that you feel safe and secure buying and selling on eBay. So we think it’s the right thing to do to have you change your password. And we want to remind you that it’s a good idea to always use different passwords for different sites and accounts. If you used your eBay password on other sites, we are encouraging you to change those passwords, too.
Here’s what we recommend you do the next time you visit eBay:
  1. Take a moment to change your password. You can do this in the “My eBay” section under account settings. This will help further protect you; it’s always a good practice to periodically update your password. Millions of eBay users already have updated their passwords.
  2. Remember to always use different passwords on different sites and accounts. So if you haven’t done this yet, take the time to do so.
Meanwhile, our team is committed to making eBay as safe and secure as possible. So we are looking at other ways to strengthen security on eBay. In the coming days and weeks we may be introducing new security features. We’ll keep you updated as we do.
Thanks for your support and cooperation. eBay is your marketplace, and we are committed to keeping it one of the world’s safest places to buy and sell.
Devin Wenig
President, eBay Marketplaces

 

04/23/2014

Exploiting the Exploit: The Marketing of Heartbleed

No doubt anyone reading this post is already aware of the Heartbleed bug affecting OpenSSL implementations of the TLS Internet security protocol. Heartbleed has received massive press coverage – deservedly so given its potential implications for a significant portion of websites and Internet-connected devices. We won’t belabor the technical details of the bug, which are summarized nicely at Heartbleed.com. What we will discuss is how Heartbleed has been publicized. To the best of our knowledge, Heartbleed is the first computer systems bug to have both its own website and its own logo, the cute bleeding heart. As such, Heartbleed sets a precedent that will have both positive and negative ramifications for future vulnerabilities and malware.

The Heartbleed website and logo were developed by the Finnish company Codenomicon, which makes fuzz testing software and provides security test services. Although the bug, officially dubbed CVE-2014-0160, was independently discovered by Neel Mehta of Google and several engineers at Codenomicon, the latter company is the one that turned it into a household word. Even among the vast majority of the population who have no idea what OpenSSL is, people everywhere quickly found out that a major bug could compromise their Internet security. For that, Codenomicon deserves thanks.

In addition, the Internet industry commendably jumped into action, with some websites being patched even before the disclosure became public and many other sites within a few days. (Patches to potentially affected embedded devices may take years, but that’s another story, and the process by which certain firms got early notification of Heartbleed is yet another...)

Despite the cooperation of Internet powers in addressing Heartbleed, VDC sees several disconcerting implications in the way the bug CVE-2014-0160 became Heartbleed the logo.

First, Codenomicon undoubtedly got a huge boost in its profile by virtue of its role in publicizing Heartbleed. Therefore, we anticipate that other security firms will seek similar attention when they discover significant vulnerabilities. We wouldn’t be surprised if discoverers prepare websites and logos before they even disclose the bugs, then flip the switch to launch their sites instantly upon disclosure. That may again produce rapid, coordinated reaction to fix the problem, but it raises questions about possibly overstating the risks associated with lesser vulnerabilities in the name of garnering publicity.

The Heartbleed bug was a biggie, deserving of widespread attention, whereas most bugs are rather mundane. Flaunting them won’t quite constitute crying wolf in the absence of threat, but it may be the equivalent of crying wolf when there’s just a loose dog poking around among the sheep.

Second, prankster-level hackers could conceivably set up fake vulnerability web pages, wasting much effort and energy before being debunked. That’s the equivalent of yelling “Fire!” in a crowded theater.

Third, and most egregious, would be malicious hackers who publicly announce a vulnerability (either real or fake) for the purpose of exploiting a different vulnerability while everyone is distracted with the first one. That’s yelling “Fire!” (or actually setting a fire) in the theater so they can rob a bank across town while the police and firemen are occupied. Password phishing email campaigns can already come in swift response to disclosure of real vulnerabilities. Now, we anticipate hackers coordinating both the disclosure and the phishing campaigns.

Sad to say, despite all the benefits of renewed examination of security protocols that will come out of the Heartbleed bug, there remain many who will seek to maximize their own gains by learning from the reactions of others.

12/10/2013

The AllJoyn Protocol: Does Its Openness Compromise Security?

On December 10, the Linux Foundation announced the formation of the AllSeen Alliance, an industry consortium that seeks to expand the Internet of Things in home and industry. Premier members include Haier, LG Electronics, Panasonic, Qualcomm, Sharp, Silicon Image, and TP-LINK, with more than a dozen additional community member companies.

The members plan to adopt an open-source peer-to-peer communications framework called AllJoyn, originally developed by Qualcomm Innovation Center and launched back in 2011. Qualcomm has now contributed AllJoyn to the Alliance. AllJoyn is hardware agnostic and can run on multiple popular OSs including Linux, Android, iOS, and various Windows desktop and embedded versions (despite the Alliance being announced by the Linux Foundation). You can find technical details of AllJoyn at www.alljoyn.org, so we won’t describe the protocol at length here.

AllJoyn enables devices to interact at the app-to-app level. The protocol handles much of the communication over ad hoc proximity networks, such as Bluetooth and Wi-Fi, with the ability to mix and match devices with different communications protocols, so that apps don’t have to deal with the lower level functions. Qualcomm’s early emphasis was to enable multi-player gaming across a variety of unlike devices, but the AllSeen Alliance seeks to foster adoption across a much broader range of devices in “the Internet of Everything.”

AllJoyn facilitates authentication and encrypted data transactions between devices. But how will AllJoyn prevent unintended devices from joining a group of devices given that the protocol was designed to make device discovery and connectivity as easy as possible?

In the case of Wi-Fi, assuming that the network is set up with proper Wi-Fi Protected Access (WPA), AllJoyn doesn’t make it any easier to gain access to the network without the security key, particularly if the network is set up to allow only whitelisted devices. For Bluetooth, a hacker within range (about 10 meters) conceivably could spoof the identity of a known device, to trick a user into accepting it into the network. In conventional Bluetooth communications, once devices are paired and connected, they could have free rein over numerous applications on each other. With AllJoyn, the protocol can be used to limit which apps can talk to each other on which device. In that sense, AllJoyn should actually increase the security of Bluetooth devices. When combined with encrypted communications, no security holes are obvious (although it’s best to assume that hackers will discover some).

In addition, AllJoyn devices are able to communicate with each other in the absence of any Internet connection, which in certain scenarios will eliminate entire realms of security risk.

VDC expects that the AllSeen Alliance will succeed in gaining acceptance of AllJoyn for consumer electronics and home control applications. But the very names AllSeen and AllJoyn imply a degree of openness that won’t inspire confidence among industrial and critical infrastructure users. The convenience advantages of AllJoyn probably won’t outweigh security concerns for those users.

Secure Your Software Supply Chain

The rapid growth in software-driven content for embedded devices is not new - nor is the recognition that connectivity and the Internet of Things are fundamentally changing the ways that OEMs deliver value to end clients.

The ways in which OEMs are responding to these new content and feature creation requirements, however, are adding new layers of complexity to the SDLC – and new vulnerabilities to their products. While many engineering organizations are scaling internal software development efforts and receiving an increasing percentage of their code bases from third-party sources, they are often not making proportional investments in their security and quality assurance processes and tools.

Code Sources

 

While there is no silver bullet to eliminate code defects and vulnerabilities, the best practices for developing high-integrity software are no secret either. Solutions like static analysis tools and commercial requirements- and variant-management tools can help OEMs limit the introduction of some defects and identify many others in advance of product deployment. In an industry where connectivity and security risks are increasing dramatically with each product generation, engineering organizations must recalibrate their risk assessment calculus and prioritize software defect and security vulnerability mitigation.

Tomorrow, Wednesday December 11th, I will be digging more into these trends and challenges facing our industry during a webcast at 2pm ET, sponsored by Klocwork.

 

Register here: http://bit.ly/1hZoaGs

 

 

11/29/2013

The Foibles of Fingerprints

When Apple announced the iPhone 5s in September 2013, much of the popular press hailed the device’s inclusion of fingerprint sensing (dubbed Touch ID) as a major breakthrough in mobile security. The more astute journalists pointed out that Motorola had brought to market fingerprint scanning in the Atrix 4G handset back in February 2011, more than two and a half years earlier. As an owner of the Atrix 4G since its early days, I can provide some insight into the real-world ups and downs of using a fingerprint scanner on a daily basis, although the proliferation of fingerprint devices presents greater security concerns.

In terms of usability, the fingerprint method clearly surpasses PIN or password or pattern input as a way to unlock a mobile handset, particularly when it’s a function that gets executed dozens of times a day. It’s one of the reasons that I have hung on to the Atrix 4G as one of my phones for this long.

Motorola Atrix 4G fingerprint sensor

A couple of scenarios confound the Atrix 4G’s fingerprint recognition. One is short term changes in fingertip skin, such as from recently wet hands that distort the skin (an extreme example being “prune finger” from shower or bath) or otherwise cause moisture-related problems for the capacitive finger sensor. (In this type of sensor, the fingerprint image is generated by electrical rather than optical differences between ridges and troughs.)

Another problem appears to be seasonal, in that skin condition varies enough from summer to winter here in New England that I have to recalibrate the handset with a fresh set of print samples a couple of times a year. A device with more sophisticated pattern recognition algorithms and more powerful processing might be able to account for such variability, and perhaps the iPhone 5s is better than the Atrix 4G in that regard.

No doubt law enforcement uses more elaborate techniques for matching prints, but as a consumer device, the Atrix 4G does remarkably well, correctly recognizing my print more than 95 percent of the time on the first swipe (i.e. fewer than 5 percent false negatives). The likelihood of false positives, that is someone else’s finger successfully unlocking the phone, is effectively zero.

Sure, a determined attacker could poach a fingerprint from somewhere else and duplicate it onto the sensor, as was widely publicized when a group of hackers successfully accessed an iPhone 5s that way only a few days after the product’s release. However, the odds of that actually happening to a phone in the wild are slim, as long as the handset maker doesn’t build the housing out of a glossy plastic that’s a fingerprint magnet. The odds are probably higher that an attacker would pick up a user’s PIN or password just by watching over the shoulder.

A much greater risk would be if hackers managed to distribute malware via an innocent looking app that uploads fingerprint data to a central server where it could be used for other nefarious purposes. Even if the fingerprint images stored on the handset (Data At Rest) are adequately encrypted, a smart enough attacker with the right level of access might be able to capture the raw data from the sensor as the finger is scanned (Data In Motion). Embedded devices of any kind that include fingerprint recognition need to be designed from the start to prevent such access. (Companies such as AuthenTec offer on-sensor encryption.) In addition to critical infrastructure like energy grid and transportation management, fingerprint sensors increasingly will appear in multi-factor authentication for broader embedded applications for financial transactions, building access, medical records, biotech laboratories, home security, and a range of consumer electronics products.

Theft of one person’s fingerprint would be an immense hassle for that individual but not a societal threat. A method of surreptitiously capturing prints from thousands or even millions of consumers could present a massive security nightmare, especially since those prints later could be employed on other devices for which a user has fingerprint access. All it would take to expose such a risk would be one consumer electronics manufacturer that shortcuts the design of one popular product to save a little on development time or BOM cost.

Users don’t have the option of resetting their compromised fingerprints as they do their passwords, and they don’t have the option of using different fingerprints to access different systems, at least not beyond the limit of two hands’ worth. Ironically, fingerprints may become less secure in the long run than other forms of authentication. In the meantime, I’m hanging onto my phone.

08/19/2013

Trusteer Your Security to IBM: Acquisition Fortifies Security Portfolio

On August 15th, IBM (NYSE:IBM) announced it reached a deal to acquire Trusteer, a Boston-based software-security firm focusing on financial and enterprise cyberthreats. As part of the deal, IBM will absorb Trusteer’s R&D lab in Tel Aviv into its security organization. One major focal point for Trusteer is their mobile security product line, which focuses on preventing intrusion and data theft through enterprise-connected mobile devices.

Smartphones and tablets are becoming integral tools for large and small businesses alike. Mobile devices – like an iPhone equipped with the SalesForce app – are a huge benefit to employees and their employer by allowing them to work remotely and efficiently while away from the office, but these devices also introduce a new set of vulnerabilities into an organization’s security. Our data shows that a large number of these devices have exploitable security flaws that leave sensitive enterprise data vulnerable. A mobile device connected to an enterprise’s network provides a link into the organization that many aren’t adequately protecting.

This acquisition reinforces two key trends: security is an increasingly important factor for all organizations and more needs to be done to protect valuable data from theft. As the number of end-points an organization deals with increases, so does the risk for a security breach. IBM recognizes this and plans to use the Trusteer acquisition to improve its enterprise security products, but the same principles hold true in the embedded industry.

The embedded world is more connected than ever before, and this trend continues to grow. Recalling famous malware threats such as Stuxnet, which infiltrated networked manufacturing platforms, it’s clear that inadequate protection of these systems is a major vulnerability for users of embedded software and hardware. IBM’s purchase of Trusteer highlights a developing industry trend: end-point protection is becoming a new priority for businesses, embedded and enterprise alike, as they work to keep cyberthreats from harming their operations.

For more information on VDC’s research about security in the embedded industry, click here.

 

By Zach D. McCabe,

Research Assistant, M2M & Embedded Technology

04/25/2013

M2M World Congress – London – Highlights from Day 1 of 2

VDC’s CEO Mitch Solomon is participating in M2M World Congress (one of the industry’s larger M2M-centric conferences) this week in London, and sent in the following post from the field.

First off, the event is oversold and is standing room only, a testament to building interest in M2M (…and perhaps the strong promotional efforts of its producer).  The day consisted of roughly a dozen presentations and panels, covering a broad landscape of topics.  Speakers were largely from major wireless carriers, primarily European.  Below are a few key insights (…derived from a much longer list), just hours after the last session of the day:

All speakers believe the much-anticipated M2M future has arrived, and they see rapid scaling in their business (as measured by M2M SIM card sales and deployments).  Most M2M business leaders within large mobile network operators are carrying aggressive growth targets (handed down from corporate), as their companies look to M2M to drive growth that far exceeds what can be achieved in their established voice and data businesses.

The words “complexity” and “challenges” were used almost as much as “the” and “it” during the course of the day.  The difficulties associated with actual M2M deployments were widely acknowledged, often in the same breath as the notion of how large the opportunity is.  Clever solutions to the biggest M2M deployment challenges were elusive (understandably, as silver bullets are usually hard to come by), though familiar suggestions like “test, test, and re-test” and “standards can help” and “pilot first, then expand” were offered up.

The only word used more than “complexity” and “challenges” was…”partner.”  Which makes sense.  It often takes partnerships to solve complex technical problems such as M2M applications.  Every carrier was touting its partnerships, some of which extend geographic coverage while others deliver value-added software and services beyond connectivity.  This is the age of M2M promiscuity, as everyone tries to seduce everyone else lest someone be left on the dance floor alone.

For a myriad of reasons, the discussions were largely focused on technology and vendor strategies (particularly carriers’) instead of OEM use cases and customer benefits (…something many audience members were a bit frustrated by).  Panel members made some attempts to address questions related to devices and OEM use cases, and some light was shed.  Overall, however, a clear impression emerged that senior people with M2M on their business cards are still working their way up the learning curve (like many others in the industry) when it comes to specific examples of how M2M-based applications can benefit their OEM customers.  This knowledge gap could be indicative of carriers and/or senior leaders at carriers being one or two steps removed from OEMs’ application development efforts, rather than a deficiency in an expected area of expertise.

With the second and final day of the event tomorrow, my hope is that panel members will share more about how OEMs are approaching, evaluating, designing, and deploying M2M-based systems.  Discussions of the supporting business cases would be particularly valuable.  If they do, it will cap off a very worthwhile two days of M2M immersion in London.