Security Comes to the Forefront at IoT Security Conference 2015


Members of the VDC team spent the last two days at the inaugural IoT Security event on the beautiful Boston waterfront, where Steve Hoffenberg, VDC’s Director of IoT & Embedded Technology, spoke alongside a distinguished panel of leaders from government, research, and industry.


One of the main themes that emerged throughout the two-day conference was the growing importance and adoption of Security as a Service. If it makes more sense, from both a financial and an operational perspective, to outsource computing, storage, applications, and infrastructure to specialized providers in order to capitalize on economies of scale and aggregated outside expertise, then it follows that portions of IoT security can also be outsourced effectively. As devices are connected to each other, and to the internet, the attack surface of the IoT software environment grows exponentially. Managing this complexity requires capabilities that traditional embedded security software may lack. We see a clear trend toward connected security features, such as network data anomaly analysis and continuous threat-definition updates, being built into device security at the OS level. The recently announced Lynx & Webroot partnership is a clear example of how IoT security companies will be able to offer OEMs added value through reduced end-user complexity and enhanced safety in the near future.


Another interesting thought came from Carl Stjernfeldt, Senior VP at Shell Venture Technologies, a division of the energy/oil giant. He suggested that Shell was looking to purchase many more sensors in the future, not only for machines but also for “sensorizing” its people, blurring the line between inert and living assets and the data that could be collected from each. Shell is certainly not the only company thinking of adding sensors to its production assets, including its human resources, but the comment raised an interesting question: will we see convergence, and growing complexity, in the management of device and human directories and their corresponding authentication protocols, which today are two separate worlds?


One more thought that we would like to leave with the reader is that of the continued overreliance on perimeter security: placing too much emphasis on stopping attackers from gaining any access to the system at all, and not enough on minimizing the damage that could be done if an attacker does get in. In many cases, perimeter security may secure a device or a network extremely well from a technical standpoint, yet a simple social hack, shortcut, or human error can render the entire system vulnerable. The principle of least privilege – assigning only the necessary access privileges to each user and system element – is a core security principle that will be fundamental to implementing safety-critical IoT networks in the future.
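As a simple illustration of the principle, the short Python sketch below (with purely illustrative role and permission names, not drawn from any real system) grants each role only the permissions its tasks require and denies everything else by default, so a compromised low-privilege element cannot perform high-impact actions.

    # Minimal sketch of least privilege: each role is granted only the
    # permissions its tasks require; anything not explicitly granted is denied.
    # Role and permission names are illustrative.
    PERMISSIONS = {
        "sensor_node":   {"publish_telemetry"},
        "gateway":       {"publish_telemetry", "read_config"},
        "administrator": {"publish_telemetry", "read_config", "update_firmware"},
    }

    def is_allowed(role: str, action: str) -> bool:
        """Deny by default; allow only actions explicitly granted to the role."""
        return action in PERMISSIONS.get(role, set())

    assert is_allowed("gateway", "read_config")
    assert not is_allowed("sensor_node", "update_firmware")  # breach is contained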


VDC's Steve Hoffenberg Speaking at IoT Security Conference in Boston


VDC's Director of IoT & Embedded Technology, Steve Hoffenberg, will be speaking at the IoT Security conference in Boston, September 22-23. He'll be hosting an Analyst Breakfast Briefing roundtable discussion on Wednesday, September 23, and later that day he'll be participating as a panelist in the session entitled "Maximizing Technology to Safeguard the Business of IoT."

Check out the full conference info at www.iotsecurityevent.com. If you plan to attend and would like to connect with Steve there, contact him at shoffenberg@vdcresearch.com.


More Tales from the Road - VDC at ESC Silicon Valley 2015


In case you missed it, VDC’s IoT & Embedded Technology team was recently in Santa Clara for the 2015 Embedded Systems Conference – Silicon Valley. We had the opportunity to meet with and get updates from a number of companies, both at the show and at several nearby corporate headquarters. Vendors we spoke with were pleased with the volume and quality of the attendees, and many training sessions operated at full capacity.

As we have for the past decade, VDC Research presented the annual Embeddy Award to the organization judged to have announced the most significant advance in the embedded software and hardware industries at ESC. VDC created and named the Embeddy to highlight the most cutting-edge product or service for embedded software developers and system engineers. We heard several compelling updates from vendors in the RTOS, tools, and processor segments. Upon final assessment, LDRA’s Tool Suite 9.5 was selected as the best in show. LDRA provides software for automated code analysis and software testing to the safety-, mission-, security-, and business-critical markets. The Embeddy Award was presented on stage in conjunction with the ACE Awards, given by EETimes and EDN.

LDRA awarded 2015 Embeddy Award


“In the 9.5 release, LDRA further extends performance of the LDRA tool suite by improving Linux support and by introducing a clear ‘Uniview’ function to help users visualize software components and development artifacts,” commented André Girard, VDC Research’s Senior Analyst of IoT & Embedded Technologies. “But since advances in functionality are less valuable if the solution is hard to use, we feel advances to both functionality and usability in this release of the LDRA tool suite are particularly important.”

Overall, VDC was pleased with the level of excitement at the show and believes UBM has found a successful format. We expect that the continuation of a consistent approach will lead to increased vendor participation in upcoming ESC events.



Lingering Thoughts from NIWeek 2015

VDC’s IoT and Embedded Technologies team recently attended NIWeek 2015 in Austin, TX. National Instruments (NI) put on an excellent conference and we had the opportunity to take in a great deal. There were inspiring and informative keynote presentations, great partner stories, the heat, interesting panel sessions, helpful one-on-one meetings with NI executives, the strange layout of the Austin Convention Center (it allegedly has a floor 2, but I’m not buying that), demos on the exhibit floor…and, well, did I mention the heat?

The IoT / IIoT Centric Focus of NIWeek

Regardless of the format – keynotes, panels, demos, 1:1s – much of the discussion tied into the Internet of Things, or the Industrial Internet of Things in NI parlance. This focus is well justified; with all due respect to Marc Andreessen, it is time to update his famous quote. Today, “IoT is eating the world.” In fact, a majority of engineers surveyed by VDC in 2014 were already leveraging the IoT. By 2017, 81% expect to use the IoT in their projects, which represents a truly remarkable shift in the engineering world!


National Instruments’ Position within the IoT

NI’s IIoT focus, and I believe it to be the right one for the company, is to provide its customers with distributed compute intelligence that would sit between the data-generating nodes and the cloud or legacy enterprise systems in the IIoT architecture.

To date, media attention has focused disproportionately on greenfield IoT applications serving home, business, and building automation. There’s a lot of innovation to be excited about in these devices, but they represent only a slice of the total available market for the IoT. NI is aiming at this broader IoT picture, which includes countless applications in all of the traditional embedded industries, such as automotive, energy, medical, and industrial. Deployments into these markets will be brownfield opportunities that need to traverse complex environments and interact with a host of existing devices varying in age and capability. Moreover, any new equipment will need to connect or integrate with numerous earlier M2M systems.

At NIWeek 2015, National Instruments demonstrated that its modular, platform-based portfolio has the functional capabilities, flexibility, and strong hardware/software integration necessary to support engineering organizations as they deploy the next generation of intelligent IIoT systems. The challenge, however, is for NI to broaden the mindset held by many traditional customers. If NI is to advance its position in the IIoT ecosystem, engineers will need to consider its platforms appropriate for deployed systems, not only for development and test & measurement.


IoT Use Cases for Enigma & Homomorphic Encryption


Homomorphic encryption is a method of encryption that allows computations to be performed on fully encrypted data, generating an encrypted result that, after decryption, matches the result of the same operations performed on the plaintext. In other words, homomorphic encryption allows a user to manipulate data without needing to decrypt it first.
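As a toy illustration of the idea (not a fully homomorphic scheme), unpadded “textbook” RSA happens to be multiplicatively homomorphic: multiplying two ciphertexts yields a ciphertext of the product of the plaintexts. The Python sketch below uses a deliberately tiny demo key and is for intuition only; textbook RSA is not secure, and fully homomorphic schemes support arbitrary computation rather than a single operation.

    # Toy demonstration of a homomorphic property using unpadded "textbook" RSA,
    # which is multiplicatively homomorphic: the product of two ciphertexts
    # decrypts to the product of the two plaintexts. Demo-sized key; not secure.
    n, e, d = 3233, 17, 413          # n = 61 * 53; e*d = 1 mod lcm(60, 52)

    def encrypt(m: int) -> int:
        return pow(m, e, n)

    def decrypt(c: int) -> int:
        return pow(c, d, n)

    a, b = 7, 9
    c_product = (encrypt(a) * encrypt(b)) % n    # computation on ciphertexts only
    assert decrypt(c_product) == a * b           # equals the plaintext product
    print(decrypt(c_product))                    # 63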

Daniele Micciancio states the problem that is solved by homomorphic encryption in a 2010 journal article entitled A First Glimpse of Cryptography’s Holy Grail:

Using standard encryption technology we are immediately faced with a dilemma: either we store our data unencrypted and reveal our precious or sensitive data to the storage/ database service provider, or we encrypt it and make it impossible for the provider to operate on it.

If data is encrypted, then answering even a simple counting query (for example, the number of records or files that contain a certain keyword) would typically require downloading and decrypting the entire database content.

IBM has shown the most interest in the development of this space thus far, presumably to bolster the security of its burgeoning cloud business. In October 2013 it was granted a patent entitled Efficient implementation of fully homomorphic encryption, but the use cases for the patented technology were limited, and IBM has been silent on its implementation of the technology since then.



MIT researchers Guy Zyskind and Oz Nathan, advised by Professor Alex “Sandy” Pentland, have recently announced a project dubbed Enigma that takes a major conceptual step toward this “Holy Grail” of a fully homomorphic encryption protocol. From the white paper's abstract:

A peer-to-peer network, enabling different parties to jointly store and run computations on data while keeping the data completely private. Enigma’s computational model is based on a highly optimized version of secure multi-party computation, guaranteed by a verifiable secret-sharing scheme. For storage, we use a modified distributed hashtable for holding secret-shared data. An external blockchain is utilized as the controller of the network, manages access control, identities and serves as a tamper-proof log of events. Security deposits and fees incentivize operation, correctness and fairness of the system. Similar to Bitcoin, Enigma removes the need for a trusted third party, enabling autonomous control of personal data. For the first time, users are able to share their data with cryptographic guarantees regarding their privacy.


Use Cases

If Enigma is implemented properly, it could have a sizable impact on the way that many companies in data-sensitive industries (such as healthcare, insurance, and finance) store and interact with their customers’ data.

Enigma’s major disadvantage comes in the form of the increased time and power (money) required to perform these computations, as distributing and operating on encrypted data is more complex than computing over plaintext. Enigma makes computation across a large number of nodes much more efficient than previous methods of multi-party homomorphic encryption, but it is still at least 20x slower than plaintext computation.

Again, we are faced with the classic tradeoff between cost and security.



"Simulated performance comparison of [Enigma's] optimized secure MPC [multi-party computation] variant compared to classical MPC." Source: Figure 4, Enigma Whitepaper.


We can currently conceptualize only a limited number of use cases, but demand is likely to come from companies in industries with heavy government regulation of data privacy.

One use case would be interactions between the hospitals and healthcare providers that store encrypted patient data per HIPAA regulations, and the research and pharmaceutical companies that would benefit from access to this data for clinical analysis. Let us imagine that Hospital X is generating large amounts of sensitive medical data. Following industry best practices under HIPAA regulations, the hospital uses AES-256 to encrypt the data and then stores it in the cloud. BigPharma, InsuranceCo, and University Y approach Hospital X, asking for permission to access and analyze the data.

Traditionally, Hospital X would have been required to first decrypt, then anonymize the data before granting access to a partner. Each of these additional steps is time-consuming and introduces complexity that increases the risk of compromising the data. With Enigma, Hospital X performs no operations on the data; it only decides whether or not to grant its partners access to the encrypted data.

Let us say that Hospital X grants University Y access to the encrypted data. Researchers from University Y specify the operations that they wish to perform on the data. Enigma then breaks the encrypted data into smaller chunks. Each chunk is processed by a separate computer, called a node. This method of problem solving is known as decentralized computing.

The benefits of decentralization are twofold: Firstly, if one node fails or aborts the computation prematurely, the other nodes can pick up and process the dropped computation. Secondly, if one node is compromised, the malicious agent will only have access to a meaningless portion of the data, and will not be able to reconstruct the entire dataset. As long as a majority of nodes are “good” (functioning and uncompromised), the computation remains flexible and secure. University Y obtains the final product without ever needing to access or handle Hospital X’s unencrypted data.
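To make the idea of nodes computing over meaningless chunks concrete, here is a minimal Python sketch of additive secret sharing, one building block of secure multi-party computation. This is an illustration of the general technique, not Enigma’s actual protocol: each record is split into random shares, every node sees only its own share, yet the per-node totals can be recombined into the true aggregate.

    # Minimal sketch of additive secret sharing (illustrative, not Enigma itself):
    # a value is split into random shares that sum to it modulo a large prime,
    # so any single share reveals nothing about the original value.
    import random

    PRIME = 2**61 - 1  # all arithmetic is done modulo this prime

    def share(value, n_nodes):
        """Split value into n_nodes random shares that sum to it mod PRIME."""
        shares = [random.randrange(PRIME) for _ in range(n_nodes - 1)]
        shares.append((value - sum(shares)) % PRIME)
        return shares

    def reconstruct(shares):
        return sum(shares) % PRIME

    records = [120, 135, 98]          # Hospital X's sensitive values
    n_nodes = 3
    per_record = [share(r, n_nodes) for r in records]

    # Node i holds only the i-th share of each record and sums what it holds.
    node_totals = [sum(shares[i] for shares in per_record) % PRIME
                   for i in range(n_nodes)]

    # Recombining the per-node totals yields the true total of the raw records.
    assert reconstruct(node_totals) == sum(records)   # 353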

The scenario described above would be more expensive than simply trusting a third-party compute solution, but it could be beneficial for a consumer-facing company’s reputation, or even mandated by the government as an addition to HIPAA or the Fair Credit Reporting Act (FCRA).



Zyskind and Nathan suggest that Enigma could be used to “store, manage and use (the highly sensitive) data collected by IoT devices in a decentralized, trustless cloud.” How exactly the concepts of homomorphic encryption and secure multi-party computation might play out in the IoT and embedded systems space remains to be seen, but it is an exciting development in an industry whose future is tied directly to advances in security and privacy techniques.

Needless to say, we at VDC Research will be keeping an eye on Enigma, as its source code and scripting language will be released near the end of the summer.



Does Windows 10 violate HIPAA?

According to Microsoft's privacy statement for Windows 10 (https://www.microsoft.com/en-us/privacystatement/default.aspx), for the Input Personalization feature, "...your typed and handwritten words are collected to provide you a personalized user dictionary, help you type and write on your device with better character recognition, and provide you with text suggestions as you type or write. Typing data includes a sample of characters and words you type, which we scrub to remove IDs, IP addresses, and other potential identifiers."

Some observers have likened this feature to a keylogger, and it is turned on by default in Windows 10.

In addition, Windows 10 Input Personalization "collect[s] your voice input, as well [as] your name and nickname, your recent calendar events and the names of the people in your appointments, and information about your contacts including names and nicknames."

Now consider a worker at a hospital, healthcare company, or even a doctor's office using a Windows 10 PC to enter medical records data or simply schedule patient appointments. Does Microsoft's collection of typed text and other information constitute a breach of HIPAA (Health Insurance Portability and Accountability Act) privacy regulations? That depends on exactly how Microsoft is collecting the input.

  • Is the input scrubbed of personally identifiable information before or after it's sent to Microsoft (i.e. on the local PC or in Microsoft's servers)?
  • Is the input data encrypted before it's transmitted to Microsoft?
  • Is Microsoft storing the collected data?

And so on. Of course, IT administrators at hospitals and healthcare companies are likely to turn off the Input Personalization feature as well as a number of other privacy settings in Windows 10 (which reside in both Home and Pro editions). But many small private practices don't have IT administrators and might not realize what's going on in the operating system.

Because Input Personalization is turned on by default, Microsoft has a responsibility to detail exactly how the feature might impact legally mandated data privacy. Thus far, Microsoft has revealed little about how Windows 10's Input Personalization works. The company has some explaining to do.


PubNub Taps IoT Niche with Real Time Data Streams

The tremendous growth potential of the IoT has created a market battle between many large, well-known companies such as Amazon, Cisco, Google, IBM, Microsoft, and Oracle. But how do smaller companies and startups become competitive in the race for IoT success? One answer: create or exploit a niche within the IoT. PubNub is a notable entrant in this respect.
Streaming of real time data is useful in a variety of IoT applications, including finance, weather, traffic, communication, e-commerce, security, systems control, home and vehicle automation, advertising, and gaming. Since PubNub's founding in 2009, the company has firmly established itself in the market and claims to be the only global-scale network for real time data streaming for web, mobile, and IoT devices.

PubNub founder and CEO Todd Greene told VDC that the company uses 14 datacenters worldwide, connecting nearly 300 million devices and processing over 350 billion messages per month at latencies of a quarter second or less. Over 2,000 customers are responsible for that immense amount of data traffic. Greene said that PubNub has been able to acquire an abundance of customers because it supplies consistent solutions to some of the IoT’s most daunting obstacles: lack of security/privacy, demanding resource requirements, and complexity of use.

Though the IoT is growing rapidly, some customers are still hesitant to adopt connected products and solutions because of recent concerns about cyber security (or the lack thereof). In addition, developers struggle to design and maintain secure systems while being fully transparent with their customers about the security measures they are taking. PubNub reduces security risks by eliminating open network ports (by tunneling data through HTTPS), supplying authentication and access to data at a granular level (from both the server and user sides), and encrypting data with AES-256.
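
For readers curious what client-side payload protection of this kind looks like in practice, here is a generic Python sketch (not PubNub's actual SDK or cipher configuration) of encrypting a message with AES-256 in GCM mode via the cryptography package before handing it to any streaming service; key distribution is assumed to happen out of band.

    # Generic sketch of AES-256 payload encryption before publishing a message.
    # Illustrative only; it does not use PubNub's SDK or its exact cipher settings.
    import json, os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)   # 256-bit key, shared out of band
    aesgcm = AESGCM(key)

    def encrypt_message(payload: dict) -> bytes:
        nonce = os.urandom(12)                  # unique nonce per message
        plaintext = json.dumps(payload).encode()
        return nonce + aesgcm.encrypt(nonce, plaintext, None)

    def decrypt_message(blob: bytes) -> dict:
        nonce, ciphertext = blob[:12], blob[12:]
        return json.loads(aesgcm.decrypt(nonce, ciphertext, None))

    wire_bytes = encrypt_message({"sensor_id": "A17", "temp_c": 21.4})
    assert decrypt_message(wire_bytes)["temp_c"] == 21.4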

Developers often have limited resources and are constrained in the amount of data they can use. Energy savings are also a necessity as portable and mobile devices and communications services expand their capabilities. The desire to maintain an open connection for data streaming may lead developers to expect that considerable bandwidth and energy are required; however, PubNub is optimized for low bandwidth usage and low battery drain. For example, only 15 to 17 kilobytes of data per day are needed for a device to maintain a persistent two-way network connection. To conserve battery power, PubNub performs a keep-alive verification only once every 5 minutes; the typical 60-second ping notification commonly used by Apple (APNS) and Android (GCM) devices causes heavier battery drain. PubNub can further reduce energy use via multiplexing, which allows data to be aggregated and streamed from multiple PubNub channels simultaneously over one TCP socket connection.

Similar to data streaming services such as Pusher, PubNub lets developers easily create apps through APIs. Greene said that PubNub sets itself apart by supporting over 70 SDKs, which allows it to handle almost every type of connected device and protocol and to cater to a broad range of users. And because it keeps a persistent socket connection, users do not have to hassle with configuring firewalls, proxy servers, or antivirus software, or with resolving double NAT. This significantly reduces the cost and complexity of building and maintaining infrastructure for products and services while also offering easy scalability.

In a broad sense, PubNub’s services are similar to content delivery networks such as Akamai and Limelight, but PubNub focuses on real time IoT data streams with device presence detection. PubNub’s Greene summarizes the service with the term RAFTA, short for routing, augmentation, filtration, transportation, and aggregation.

PubNub’s unique position and foothold in the IoT market give it the potential to expand and further monetize its business (which is based 100% on recurring revenues). The company has already developed services targeted at vertical market applications, such as fleet vehicle dispatch and home automation, and will be adding more soon. For OEMs or prospective business partners seeking IoT services, PubNub is a company to keep in mind.

This article was written by Rodshell Fleurinord, VDC Research Assistant, with Steve Hoffenberg, Director.


Microsoft Setting Precedents in Data Sovereignty and Residency

Microsoft recently announced that the company will open two datacenters in Canada, to provide its Azure cloud service to the Canadian Government and businesses operating in that country. Kevin Turner, Microsoft’s chief operating officer, said “this substantial investment in a Canadian cloud demonstrates how committed we are to bringing even more opportunity to Canadian businesses and government organizations, helping them fully realize the cost savings and flexibility of the cloud.” (To read the full press release from Microsoft, see here.) In an article in Toronto’s Globe and Mail newspaper about the announcement, Janet Kennedy, president of Microsoft Canada, said, “there is no technical reason to do it.” The main reasons are data sovereignty and residency.

Data residency deals with where data is physically located and where it should not go without agreement from its owner. Data sovereignty focuses more on why and how a government should protect the data located within its jurisdiction, regardless of its ownership, from foreign government agencies.

These data issues have been hot topics at both the personal and business levels, especially since the Edward Snowden revelations. Since then, foreign government agencies and companies have tried to mitigate the risk of leaking their information. For example, the German Government terminated its contract with Verizon in favor of Deutsche Telekom shortly after Snowden disclosed reports of the NSA’s spying activities. In the Canadian Government’s case, the government was not willing to store its sensitive information in the United States, where it might be subject to investigation by the U.S. Government. Microsoft responded to the Canadian Government’s concern by proposing the new datacenter plan. (In 2014, Microsoft had launched a cloud service called Azure Government, dedicated to servicing the U.S. federal government via a datacenter isolated from the rest of the Azure network.) Although Microsoft is not the first or only cloud provider dealing with data sovereignty and residency issues, it has been thrust into the center of the debate.

With the emergence of the cloud industry, physical borders between countries have become porous, and in several instances governments have tried to subpoena data physically located in another country. One notable example is a U.S. Government court order for Microsoft to provide a customer’s emails and other data stored in Microsoft’s datacenter in Dublin, Ireland. The government’s argument is that no American needs to set foot on Irish territory to retrieve the data; a couple of keystrokes is all it would take. Microsoft, on the other hand, believes that electronic access to the datacenter should be considered entering Irish territory, since the actual data is located in Dublin. The company has yet to provide the data and is appealing the court’s decision.

Brad Smith, Microsoft’s General Counsel and Executive Vice President of Legal and Corporate Affairs, has been addressing the conflict in the Microsoft on the Issues blog. Smith argues that Microsoft will not ignore the opinions of the 96 percent of the global population outside the United States.

More than 20 tech companies, including Apple and Cisco, as well as various interested organizations, have filed amicus briefs in support of Microsoft’s position in the case. The Irish Government has also expressed its support for Microsoft, insisting that it would cooperate with the United States to facilitate the process but that the United States should not bypass the regulations currently in place.

Trying to avoid potential disputes and to protect data, some countries have established regulations that prevent data not only from being subpoenaed, but also from being accessed and distributed to another country without consent. The European Union is in the process of finalizing its General Data Protection Regulation which, among other things, will limit the export of personal data and ask every global organization based in Europe to appoint a data protection officer. (Countries outside the European Union with data residency restrictions include Argentina, Australia, China, Mexico, New Zealand, and Russia.)

Recently, Microsoft started providing statistics on law enforcement requests, thanks to the USA Freedom Act, just enacted on June 2, 2015. In a report to be published every six months, Microsoft informs readers of its “principles in responding to government legal demands for customer data”:

  • “[Microsoft] require[s] a valid subpoena or legal equivalent before [it] consider[s] releasing a customer’s non-content data to law enforcement;”
  • “[Microsoft] require[s] a court order or warrant before [it] consider[s] releasing a customer’s content data;”
  • “In each instance, [it] carefully examine[s] the requests [it] receive[s] for a customer’s information to make sure they are in accord with the laws, rules and procedures that apply.”

In the second half of 2014, data from 52,997 accounts were requested by law enforcement agencies around the globe in a total of 31,002 requests. Only 7.55% of the requests were rejected outright by Microsoft, and the company disclosed the data contents of 3.36% of the accounts requested. (In the majority of requests, Microsoft only disclosed subscriber or transaction information, not account contents. See the full report from Microsoft here.)

Microsoft is trying its best to protect itself and the cloud industry by setting a precedent with the Dublin case. Nevertheless, even as multiple countries focus their efforts on shielding their own businesses from data-related controversies, cloud service users and providers should not disregard these issues. As the cloud industry and the IoT grow, the rate of data generation is going to increase exponentially. All businesses using cloud services must now consider data residency and sovereignty, in addition to data privacy and security.

This blog post was researched and written by Se Jin Park, VDC Research Assistant (with Steve Hoffenberg).


Privacy and Security Trends in IoT – Parsing the FTC’s Guidance

Privacy and security are both huge concerns for consumers and businesses alike in the evolving IoT landscape. A privacy breach is the unauthorized use of data by an entity that has been granted access to a dataset; privacy thus governs the relationship between companies and their customers, and any breach of that contract is a privacy concern. Security, on the other hand, concerns the unauthorized use of, or access to, data by an entity that has not been granted access to the dataset; e.g., hacking and external security breaches. Both privacy and security goals will be hard to reconcile with the main aim of IoT development: monitoring, collecting, analyzing, and using massive amounts of data.

Whose job is it to protect sensitive data in these rapidly-growing IoT industries? Responsibilities for data privacy and security vary by industry and by country. In the US, when companies are not regulated by another agency (e.g. the Department of Health and Human Services for HIPAA rules on medical patient data), this responsibility usually falls under the jurisdiction of the Federal Trade Commission (FTC).




The FTC has conflicting interests to balance. The Commission was created in 1914 to break up the increasingly powerful corporations that controlled the oil, steel, and tobacco industries, with the end goal of protecting consumers from “unfair or anticompetitive practices.” Conversely, the FTC must avoid “unduly burdening legitimate business activity.” The FTC walks a fine line between social and moral conservatism and economic progress.

As with the majority of emerging and semi-defined technologies, the US government has been largely content to let the market shape the development of the IoT services market. Yet the steadily growing stream of privacy concerns (Snapchat, NSA, Google, Facebook, etc.) and security breaches (Anthem, Blue Cross, Target, Adobe, LastPass, the US Office of Personnel Management) has made it clear that the FTC will need to make its presence felt in the IoT services market sooner rather than later. It is quite apparent that many entities simply do not have the proper incentive to thoroughly self-regulate with regard to privacy and security. Data regulation is in its infancy, and it will undoubtedly be a daunting task.



The FTC published corporate guidance on privacy and security practices earlier this year. Let us parse this document to see if we can elucidate any key findings and conclusions. It is important to keep in mind that none of these recommendations carry the weight of law; the report simply “summarizes the workshop and provides staff’s recommendations in this [IoT] area.”



The FTC makes six main security recommendations intended to prevent breaches of data. Companies should:

  1. Plan by building security into devices “at the outset, rather than as an afterthought”
  2. Train “all employees about good security, and ensure that security issues are addressed at the appropriate level of responsibility within the organization”
  3. Hire only service providers that can maintain “reasonable security and provide reasonable oversight for these service providers”
  4. Layer by implementing “security measures at several levels”
  5. Protect by “implementing reasonable access control measures to limit the ability of an unauthorized person to access a consumer’s device, data, or even the consumer’s network”
  6. Monitor and fix, patching known vulnerabilities throughout the product lifecycle “to the extent feasible.”

This is the full extent of the security recommendations. These are all common practice in industry, and the vague nature of the language adds little value to the discussion of how the FTC specifically might regulate data in the IoT market.


Data Collection & Privacy

In the privacy section of the FTC report, the agency recommends that companies minimize the amount of data they collect, but the recommendation is quite flexible, giving companies the option to collect potentially useful data with consumer consent. But how does a company obtain consent when the device or service has no interface, as will be the case with many embedded devices employed in the IoT market?

According to the FTC, as long as the use of the data is “expected” and “consistent with the context of the interaction,” a company need not explicitly obtain consent to collect data. This language does not set any standards; rather, it is remedial language that can be applied to different situations post-incident. The FTC couples this expected-use language with industry-specific legislation, such as the Fair Credit Reporting Act, which restricts the usage of credit data in certain circumstances. In summary, under these recommendations a company has nearly full discretion in the collection and usage of data as long as it can prove that it is using the data in an “expected” manner relative to the nature and context of its relationship with its customer (barring any industry-specific legislation).

The report notes an interesting idea proposed by MIT Professor Hal Abelson. He suggests that data be “tagged” upon collection with its appropriate uses, so that other software could identify and flag any inappropriate uses, providing a layer of protection and forcing the company to think about how it will use the data before collecting it. We expressed a similar view in a recent VDC View document entitled “Beyond ‘Who Owns the Data?’,” suggesting that IoT vendors develop and implement data structures to permit highly flexible assignment of data access rights and usage permissions. Tagging would certainly be one way to segregate usage rights and protect different streams of data.
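As a hypothetical sketch of how such tagging might work (the names and structure below are illustrative, not a specification from the FTC report or the VDC document), each record carries the set of uses permitted at collection time, and an access layer rejects any request whose declared purpose is not in that set.

    # Hypothetical sketch of use-tagging: records carry their permitted uses,
    # and the access layer denies any request with an untagged purpose.
    from dataclasses import dataclass, field

    class PolicyViolation(Exception):
        pass

    @dataclass
    class TaggedRecord:
        payload: dict
        permitted_uses: set = field(default_factory=set)

    def access(record: TaggedRecord, declared_purpose: str) -> dict:
        """Return the payload only if the declared purpose was tagged as permitted."""
        if declared_purpose not in record.permitted_uses:
            raise PolicyViolation(f"'{declared_purpose}' is not a permitted use")
        return record.payload

    reading = TaggedRecord(payload={"heart_rate": 72},
                           permitted_uses={"clinical_analysis", "billing"})

    access(reading, "clinical_analysis")       # allowed
    # access(reading, "targeted_advertising")  # would raise PolicyViolation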



The FTC states that any legislation specifically targeting the IoT would be premature at this point. However, the staff recommends that Congress enact “general data security legislation” and “basic privacy protections,” which the Commission cannot mandate itself. Basically, the FTC needs a new legislative base from which to launch lawsuits. Congress created an IoT Caucus shortly after the filing of this FTC report, but it has been mostly silent since its inception.



FTC Commissioner Joshua Wright

Perhaps the most interesting part of this report comes in the form of a dissent by one of the five commissioners (leaders) of the FTC. Commissioner Joshua Wright notes that the FTC generally issues two types of reports: 1) an in-depth and impactful report commissioned by Congress that compels private parties to submit data to the FTC for analysis and review; or 2) a slightly less formal report that details and makes public any workshops conducted by the Commission, concluding with recommendations that are supported by substantial data and analysis.

Wright contends that this FTC report does not fit either of these categories, and he goes on to shred the report. Firstly, he argues, the IoT is a nascent and far-ranging concept; a one-day workshop cannot generate a sufficient sample of ideas or range of views to support any policy recommendation. Secondly, he observes that the report “does not perform any actual analysis,” instead merely relying on its own assertions without qualification or economic backing. He goes as far as to say that the report merely pays “lip service” to a few obvious facts. Thirdly, he remains unconvinced that the Fair Information Practice Principles (FIPPs) are a proper concept to apply to the IoT, favoring instead “the well-established Commission view [that] companies must maintain reasonable and appropriate security measures; that inquiry necessitates a cost-benefit analysis. The most significant drawback of the concepts of ‘security by design’ and other privacy-related catchphrases is that they do not appear to contain any meaningful analytical content.” Commissioner Wright clearly has a large bone to pick with the method by which the FTC is considering data regulation in the IoT market.



Corporations and consumers alike in the IoT market would do well to pay attention to the following conclusions that we can draw from the FTC document and Commissioner Wright’s dissent:

  1. Congress has not yet created a legislative base upon which the FTC can clearly pursue judicial remedies for breaches of privacy specific to the IoT market (barring specific acts such as the Fair Credit Reporting Act).
  2. Even if the legislation were in place, the FTC has not performed a proper cost-benefit analysis of the potential impact of privacy and security breaches within the IoT market, thus it cannot recommend clear, data-backed, corporate guidance at this time.
  3. The FTC clearly recognizes the “profound impact” that the IoT will have on consumers, and is looking into regulation, but is internally conflicted about how to move forward.
  4. The report does not introduce any new incentives for companies to better safeguard customer data or to implement less-intrusive privacy contracts, so we can expect to see continued growth in data collection in the IoT market in line with VDC’s forecasts.
  5. Consumer-facing companies that wish to differentiate themselves from competitors would do well to safeguard their data; we may very well see security breaches as the norm in the near future, so a company with a clean history will have an advantage in the market. See VDC’s series of reports on Security and the IoT for deeper analysis of security issues.


Samsung Invests in Sigfox. Is the Race Over for Long-Range Low-Power Wireless Competitors?

A preliminary market battle has been brewing over the past year between technologies to connect IoT devices via wireless wide area networks. These cellular-type networks allow very low-power battery devices to transmit small amounts of data over several miles, a solution highly suitable to many types of IoT devices such as weather sensors and smart meters. Entrants in this market include Sigfox, LoRa, and Neul. (In addition, standards organization IEEE is developing the 802.11ah wireless networking protocol for distances up to a kilometer.)

Sigfox announced on June 15 that Samsung’s Artik IoT platform would integrate Sigfox support. Also noted in the press release, but given less attention, was that Samsung’s venture capital arm is investing in Sigfox. The size of the investment was not disclosed. (See Sigfox press release here.) In February of 2015, Sigfox announced that it had secured from a variety of venture capital firms an investment round totaling $115M, reportedly the largest single VC investment round ever in France, Sigfox’s home country. (See Sigfox press release here.)

Thus far, Sigfox has been the only long-range low-power wireless solution already deployed in commercial operations, with several hundred thousand devices connected. It has networks in place in France, as well as in Spain, Portugal, the Netherlands, parts of the UK, and a number of cities around the world, most recently, in the San Francisco Bay area of the US.

LoRa—developed by Semtech—has the backing of IBM, Cisco, and Microchip among the members of the LoRa Alliance, and its initial deployments are imminent.

UK-based Neul is still in its demonstration phase, but the company was acquired for 15M British Pounds in September 2014 by Chinese telecommunications equipment giant Huawei.

VDC won’t attempt here to compare the relative technical merits of these long-range low-power wireless systems, but from a market standpoint, it is clear that Sigfox is leading the pack. And it’s tempting to think that an investment by Samsung will propel Sigfox into an insurmountable lead. But we’re not yet ready to draw that conclusion. Some points for consideration:

  • Although the Samsung name will undoubtedly give a significant shot in the arm to Sigfox’s marketing efforts, without knowing the size of Samsung’s investment, we can’t assess the extent of its impact on the ability of Sigfox to get its networks deployed more broadly.
  • Long-range wireless solutions face the chicken-and-egg problem of needing the network infrastructure (antennas and backhaul) in place to persuade manufacturers to develop products using the technology, while needing products coming to market to warrant investment in the infrastructure.
  • As one of the world’s largest makers of electronic products, Samsung has the potential to dramatically increase availability of Sigfox-compatible devices if it so chooses. Thus far, however, Samsung hasn’t committed to using Sigfox in anything other than its Artik IoT platform.
  • Samsung also makes cellular networking equipment, although that represents a relatively small part of its overall business. (Samsung does not publicly disclose revenue for the segment.) By contrast, two-thirds of Huawei’s entire business ($31B out of $46B in 2014) is derived from cellular networking equipment, mostly sold in China and the EMEA region. While either company could conceivably foster widespread installation of long-range low-power networks through technological investment and pricing strategies, it’s unclear which would have greater motivation to do so.
  • LoRa has some heavyweight backers as members of its Alliance, but such membership has not yet yielded investment that will produce meaningful numbers of either chickens or eggs. [Note: the day after this blog was posted, the competition ramped up, as LoRa startup Actility announced that it had received a $25M round of VC funding led by Ginko Ventures, with participants including telcos KPN, Orange, and Swisscom, as well as Foxconn, the world's largest contract manufacturer. See Actility press release here.]

In the meantime, Samsung’s investment gives Sigfox an even larger lead in the race for long-range low-power wireless networks. But it’s a long way to the finish line.


