07 July 2011
27 July 2010
Sensor and control networks typically lack most of these building blocks. Designed to optimize response time, they use short packets that cannot easily accommodate the larger packet sizes associated with high-security encryption.
Some control networks, LONWORKS® for example, include an authentication mechanism, but in practice it is infrequently implemented because its use complicates key management in multi-vendor networks. Intrusion detection, for wired or wireless control networks, is typically not available, nor is firewalling or endpoint compliance – certainly not at the sensor/actuator level, and sometimes not even at the controller level.
Quick fixes to address these limitations are not easily incorporated because the protocols employed are often embedded inside microprocessors that lack the processing power and memory to support the necessary security algorithms, buffers, and certificates.
Fortunately most control networks today interface with an IP-based network for management, monitoring, and/or control. And it is at this interface that you can click the ruby slippers and apply proven security techniques like policy-enforcement firewalling to prevent the control network from launching Denial-of-Service (DoS) attacks or non-compliant devices from accessing the network.
If the control network is IP-based then the protective measures can be applied to the control devices themselves – if not, then protection can only be applied to data traversing the interface between the sensor/actuator network and the IT systems to which it is connected, i.e., the latter can be protected against the former. Either way, greater security will be obtained than if no protective measures were applied between the control devices and the network to which they are connected.
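As a concrete illustration of one such interface-level measure, the sketch below shows a token-bucket rate limiter that a gateway could apply to traffic leaving the control network, capping how quickly control packets reach the IP side and blunting an unintentional denial of service. It is a minimal, hypothetical example (the rate and burst figures are invented), not any particular product's implementation:

```python
import time

class TokenBucket:
    """Token-bucket rate limiter: admits at most `rate` packets per second
    on average, with bursts of up to `capacity` packets."""
    def __init__(self, rate, capacity):
        self.rate = rate            # tokens replenished per second
        self.capacity = capacity    # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill tokens for the elapsed interval, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True    # forward the packet to the IP network
        return False       # drop: control network exceeded its budget

# Example: allow 100 control packets/second with bursts of 20.
bucket = TokenBucket(rate=100, capacity=20)
# A burst of 200 back-to-back packets: the first 20 pass, the rest are shed.
admitted = sum(1 for _ in range(200) if bucket.allow())
```

Placed at the control/IT boundary, a policy like this contains a flooding control network without requiring any change to the control devices themselves.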
The range of available security features that may be applied depends on the control network architecture, and includes:
The protective measures afforded by these techniques can be applied prophylactically to reduce some or most of the control system’s vulnerabilities.
With regard to cost, if Wi-Fi based sensors and actuators are used, the protective measures built into the wireless LAN infrastructure can be applied at little or no additional expense. If IP-based sensors and actuators are used, there will be some incremental expense but the devices themselves will not have to be replaced because they already have the essential building blocks for higher security in place. If a non-IP based control network is used then the benefits will vary.
The table below summarizes how the security features described above can be employed to enhance the security of commonly used control networks (features specific to wireless networks are left blank when applied to wired control networks).
SCADA, smart grid, and energy management systems sit at the heart of industry and commerce. This blog series was intended to highlight that defending these systems against attack must become a high priority because you can't use what you can’t control.
The control networks on which these systems depend today have unintended vulnerabilities. These vulnerabilities can be corrected in whole, part, or not at all depending on the architecture and technology of the underlying network.
Consideration should be given to retrofitting security systems into existing IT infrastructure to address security concerns, removing control networks for which there are no corrective measures, and ensuring that any new control-related infrastructure is designed with protective measures built-in from the outset.
For more information on security solutions that you can apply today please visit Aruba's Web site.
Following a rise in the theft of payment card data, the Payment Card Industry (PCI) standards council was created by the top card brands to combat such crime. The resulting PCI Data Security Standard (DSS) defines mandatory security guidelines for use by all merchants and service providers that store, process and transmit cardholder data.
Wireless LAN security is a core component of these requirements. DSS v1.1 permitted the use of WEP encryption. Indeed, many retailers wanted to continue using the WEP devices they had already purchased, not because of the encryption scheme but to avoid the capital outlays required to replace WEP devices with higher security equivalents.
While WEP encryption is easily cracked, and was subsequently banned under DSS v1.2, an ingenious method was used to protect WEP devices so they could continue in service until DSS v1.2 was implemented. This solution protected the network without requiring any changes to the WEP devices or any software clients added to them. It holds great promise for the protection of SCADA, smart grid, and energy control systems.
Consider the humble bar code scanner. A workhorse of both point-of-sale (POS) and logistics systems, many scanners in use today rely on 802.11b/g Wi-Fi and WEP. Data from the scanners are passed via Wi-Fi to the enterprise network. If you crack WEP you therefore potentially open a back door into that network.
Integrating a stateful, role-based policy enforcement firewall into the wireless network slams shut this back door. By blacklisting unauthorized devices based not on the port through which they entered the network but on the user and/or type of device, unauthorized users can be denied access to the rest of the network.
The firewall can distinguish between multiple classes of users, allowing one common network infrastructure to function as independent networks whose isolation is ensured by policy enforcement. Guest access is separate from POS which is separate from logistics, etc.
The elegance of this approach is that it can be retrofitted to existing networks – wired and wireless – using a true overlay model, without any software clients or other changes to the devices being protected. It protects any device from any manufacturer.
This same segmentation and policy enforcement scheme can be applied to wired and wireless sensors as soon as their data hit the IT infrastructure. Access rights, quality-of-service, bandwidth, VLANs – almost any parameter can be controlled and actively managed by the stateful, role-based policy enforcement firewall. It is to the benefits of this approach, used in conjunction with additional security enhancements, that we’ll turn in the next posting.
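To make the idea concrete, a role-based policy check reduces to a lookup from device identity to role, and from role to permitted resources. The sketch below is purely illustrative – the roles, MAC addresses, and policy fields are invented for the example and do not reflect any vendor's actual policy language:

```python
# Hypothetical role table: which VLANs each class of device may reach,
# plus a per-role bandwidth cap the enforcement point could apply.
POLICIES = {
    "pos_scanner": {"allowed_vlans": {"pos"},       "max_kbps": 256},
    "guest":       {"allowed_vlans": {"internet"},  "max_kbps": 1024},
    "sensor":      {"allowed_vlans": {"telemetry"}, "max_kbps": 64},
}

def classify(mac, role_db):
    """Assign a role by device identity, not by ingress port.
    Unknown devices fall into a quarantine role with no access."""
    return role_db.get(mac, "quarantine")

def permit(role, dest_vlan):
    """May a device in `role` send traffic to `dest_vlan`?"""
    policy = POLICIES.get(role)
    return policy is not None and dest_vlan in policy["allowed_vlans"]

# Example: a known scanner may reach the POS VLAN but not the corporate LAN.
role_db = {"00:1a:2b:3c:4d:5e": "pos_scanner"}
role = classify("00:1a:2b:3c:4d:5e", role_db)
ok_pos = permit(role, "pos")          # scanner-to-POS traffic is allowed
ok_corp = permit(role, "corporate")   # scanner-to-corporate is denied
```

One shared infrastructure thus behaves as several isolated networks: guest, POS, and telemetry devices each see only the slice of the network their role permits.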
These direct-wired systems were subsequently replaced by time- or frequency-division multiplex systems that allowed one common cable to be shared among multiple devices. Installation was simpler and less expensive, but the controller was more complex and remained, as before, a central point of failure should its program fail to execute properly.
Next up were intelligent, distributed networks in which devices communicated directly with one another on a peer-to-peer basis, without the need for a central controller. Locally intelligent and able to communicate on shared communication medium with any other device on the network, these networks allowed reconfiguration of system functionality via software download over the network. Peer-to-peer communications allowed the direct exchange of information between any or all of the devices without intervention by any central device, eliminating the single point of failure issue.
Regardless of the specific architecture used, in all cases the objective of the control network was to deliver status information as quickly as possible to all devices that needed updates. The protocols were highly optimized for short control packets, and nary a bit was “wasted” on ancillary data or status.
The same optimization guidelines applied to the microcontrollers running the devices. To keep costs down and thereby allow the networks to be pervasively deployed down to the lowest-cost sensor/actuator, processors were optimized for rapidly processing short packets.
The popularity of IP connectivity spawned the development of IP-based control networks in which Ethernet or Wi-Fi forms a backbone for linking different sections of a control network. While controllers were the first devices to sit on an IP network, increasing numbers of native IP sensors and actuators are reaching the market.
Many IT departments prohibit the connection of any IP-based, control-related sensor/actuator, controller, or gateway to their corporate networks out of concerns about network integrity and security. IT managers are legitimately concerned that the high offered traffic of control networks, some of which run at 100% channel utilization, will overwhelm their Ethernet networks and cause unintentional denial of service. Others are concerned that control networks, for which security standards are rarely a high priority, could become unprotected back doors into the corporate network.
What is rarely if ever discussed is how exposed the enterprise is to unauthorized manipulation of the control devices themselves. These systems control the power at the heart of every business and institution, and it is paramount that they be protected against unauthorized manipulation. It is to this point that we’ll return in the next installment of this series.
* * * * * * *
In the 1980s the proximity access card was introduced to the building security market. Until that time, gaining access to high security facilities – including many government agencies – required one to physically insert a magnetic stripe or Wiegand card into a reader.
Proximity card readers from Schlage, Sielox, Indala, and others overcame the inconvenience of swiping a card by using radio energy to sweep the area in front of the reader. Users needed only to place their wallet, purse, valise, or ID badge near a reader and the radio energy would be picked up by their proximity card.
A tuned circuit internal to the card would resonate when within range of the reader, generating a unique radio signature that would be captured and analyzed by the access control system. If the signature matched that of a valid card already programmed into the system, access would be granted. Simple, elegant, and convenient, proximity card systems quickly grew in popularity.
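For the curious, the resonant frequency of such a tuned LC circuit follows directly from its component values: f = 1/(2π√(LC)). The values below are illustrative only, chosen to land near the 125 kHz band used by many proximity cards, and are not taken from any actual card design:

```python
import math

def resonant_frequency(inductance_h, capacitance_f):
    """Resonant frequency in Hz of an ideal LC circuit:
    f = 1 / (2 * pi * sqrt(L * C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

# Illustrative component values: a 1.62 mH coil with a 1 nF capacitor
# resonates near 125 kHz, a common proximity-card frequency.
f = resonant_frequency(inductance_h=1.62e-3, capacitance_f=1e-9)
```

A reader sweeping at that frequency excites any card tuned to it, which is exactly what makes both the convenience and the surreptitious-detection problem possible.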
Problem was, this innovative technology had profound, unintended consequences. It allowed the surreptitious identification of people with access privileges to high-security facilities. One could use radio energy to sweep a crowd and pick out persons of interest from the signatures generated by their proximity cards. At a time when the Cold War was steamy hot and espionage was rampant, the proximity card was a new-found tool for adversaries.
The unintended consequences of a new technology are not usually discovered until after it's in use, sometimes widespread use, by which time available remediation options might be limited or very expensive. Such is the case with SCADA, smart grid, and energy management systems, which are now front and center in the effort to better manage energy consumption and lower greenhouse gases. Unintentionally vulnerable to manipulation and unauthorized access, these systems can literally turn out the lights, stopping a utility or enterprise cold in its tracks.
15 July 2010
One consequence of the flood of mobile devices is growing congestion on cellular data networks. Slow and dropped network connections are legion in large metropolitan areas like Beijing, New York, and San Francisco. Cellular data traffic is rising beyond sustainable network capacity, and there are no signs that it will abate any time soon.
This problem is compounded by the challenge carriers face in obtaining acceptable ROI from their massive infrastructure investments. Value-added services like video help a carrier’s bottom line, but the more bandwidth-hungry video booms, the more capacity is squeezed. Sticky new services and applications needed to secure customer loyalty only add to bandwidth woes.
One solution is to offload bandwidth-intensive multimedia traffic to nearby Wi-Fi networks, a process called “cellular offload.” In theory pushing traffic from overcrowded cellular networks onto high capacity, high-speed Wi-Fi networks should alleviate network congestion. The challenge for carriers is ensuring that bandwidth relief doesn’t come at the expense of the customer experience…or at the customer’s expense.
Cellular offload must be simple to initiate, the quality of service on Wi-Fi must be equal to or better than that offered on cellular, and there should be no cost penalties to the user. That’s a tall order. Many a manufacturer of metropolitan mesh Wi-Fi networks has attempted cellular offload and failed.
Why? Because metro mesh networks were designed for e-mail and Web access, and not high-density, latency-sensitive data, voice, and video applications. Mesh technology is available that can handle these types of applications, Azalea Networks being a noted example, but metro mesh vendors have so fouled the market that customer resistance is high though not insurmountable.
Cost penalties are another concern. Some carriers, AT&T among them, are trying to convince subscribers to pay twice for cellular offloading – once for cellular data service and once for a home Wi-Fi access point to handle traffic that the cellular network can’t. Even if the economics did work for a consumer, this stop-gap crumbles the moment users step foot outside their homes. A system-wide solution – not an ad hoc one – is the only way to address the dilemma.
A corollary to Parkinson’s Law says that data expands to fill all available bandwidth. So while some pundits say we’ll obtain bandwidth relief from 4G cellular (most studies say otherwise), those networks will attract applications that are even more bandwidth-heavy.
What we need is a commuter lane to handle network overspill and ensure that essential and urgent cellular traffic has the bandwidth it needs. Wi-Fi networks can be that path, if constructed correctly and with the right building blocks, and can do so at a price that is affordable on a vast scale.
So let's stop blaming the rising popularity of Web-enabled smartphones and start focusing on using Wi-Fi to solve the problem.
29 April 2010
Distracted by the commotion, the extraction proceeds unnoticed. That is until you next reach for your money only to find it's gone missing. Never to be seen again.
This week at Interop Cisco created such a diversion when it announced the availability of a new hardware-based spectrum analyzer. With features remarkably similar to Aruba's recently announced software-based spectrum analyzer - and using words so closely paired to Aruba's that a plagiarist would swoon - Cisco proclaimed that the world at last had a solution for dirty air. The secret: a new line of access points containing - drum roll, please - an embedded ASIC. Did that get your attention?
Now for the dip. In order to get this feature you have to replace your existing access points. If you want clean air everywhere then you have to replace all of the access points in your network. Every single one. Brilliant!
You've got to give credit where credit is due. Project "CleanWallet" is really a double-dip - once for new APs and once for the 802.11n APs you only just purchased. Even the Artful Dodger would be impressed.
Silly sods, us. Instead of forcing customers to divvy up cash to replace their access points, our new software-based spectrum analyzer works with all Aruba 802.11n access points, including those already installed. Aruba's spectrum analyzer is feature rich, and includes Fast Fourier Transform (FFT) analysis, spectrograms, interference classification, and programmable recording/playback.
We don't require any new hardware to make spectrum analysis work, and for customers using our Wireless Intrusion Prevention Module the feature comes for free. Aruba's 802.11n access points are already significantly less expensive than Cisco's, so the entire Wi-Fi system, including spectrum analysis, is easy on your wallet.
If Project "CleanWallet" isn't your thing, give us a call. We'll prove that you don't have to pay through the nose or sacrifice features to get clean air.
18 April 2010
Let's face it, forklift upgrades are driven by vendor greed. The worst offenders make no apologies for their inability and/or unwillingness to design upgradable products. It's just not in their DNA. Product design recapitulates corporate philosophy, to paraphrase Haeckel.
There are existence proofs that a forklift is not a mandatory prerequisite for obtaining a new feature - even one incorporating a profoundly complex new technology. A forklift-based strategy must therefore originate in a forklift-oriented mentality.
Case in point - spectrum analysis.
Wi-Fi networks operate in environments containing electrical and radio frequency devices that can interfere with network communications. 2.4 GHz cordless phones, microwave ovens, wireless telemetry systems, and even adjacent Wi-Fi networks are all potential sources of interference. Interference sources can be either continuous or intermittent, the latter being the most difficult to isolate.
The task of identifying interference typically falls to a spectrum analyzer, the gold standard for isolating RF impediments. Spectrum analyzers help isolate packet transmission issues, over-the-air quality of service problems, and traffic congestion caused by contention with other devices operating in the same channel or band. They are an essential tool to ensure that networks run as they should.
To be effective the analyzer needs to be in the right place at the right time. The ideal solution is a spectrum analyzer that’s built into the wireless LAN infrastructure, and can examine the spectral composition of the RF environment anywhere in the Wi-Fi network, at any time. Today vendors offer handheld spectrum analyzers as well as ones that require the addition of spectrum analysis monitors (effectively doubling the total number of access points on site for full coverage).
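Under the hood, spectrum analysis boils down to transforming time-domain RF samples into the frequency domain and flagging energy where none should be. The toy sketch below uses a naive discrete Fourier transform on synthetic samples to pick out a strong interferer; a real analyzer uses hardware or optimized FFTs over vastly more samples, so treat this purely as an illustration of the principle:

```python
import cmath
import math

def dft(samples):
    """Naive discrete Fourier transform; returns the magnitude of each
    frequency bin. O(n^2), fine for a toy example."""
    n = len(samples)
    return [abs(sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)))
            for k in range(n)]

# Synthesize 64 time-domain samples: a weak wanted signal in bin 5
# plus a strong interferer in bin 20 (think microwave oven vs. Wi-Fi).
n = 64
samples = [0.2 * math.sin(2 * math.pi * 5 * t / n) +
           1.0 * math.sin(2 * math.pi * 20 * t / n)
           for t in range(n)]

spectrum = dft(samples)
# For real inputs, bins above n/2 mirror those below, so inspect only half.
peak_bin = max(range(n // 2), key=lambda k: spectrum[k])
```

The dominant bin identifies the interferer's frequency; classification then maps that spectral signature to a likely source (cordless phone, microwave oven, and so on).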
Rumors are that at least one vendor will be offering new access points with integrated spectrum analysis. Consistent with their company policy, however, a forklift upgrade will be required to use it.

Aruba has taken a completely different tack with spectrum analysis. Its recently introduced scientific-grade spectrum analyzer includes traditional tools such as Fast Fourier Transform (FFT), spectrograms, and interference source classification. It also includes powerful new features such as interference charts, channel quality measurement, and spectrum recording and playback.
Uniquely, the new spectrum analyzer works with all Aruba 802.11n access points, including those already in service. That is, a customer with an existing Aruba 802.11n deployment can enable spectrum analysis on any of their existing access points without adding any new hardware. None.
And the cost? Zero if you are already using Aruba's Wireless Intrusion Protection (WIPS) Module into which the new analyzer is integrated.
Why does Aruba introduce new features that expand the capabilities of its customers' already deployed networks? Why did it add distributed forwarding without a controller in the data path? E9-1-1 call positioning? Wired switch management?
Because adding features recapitulates our corporate commitment to value, driving growth by enhancing the utility of our customers' investments. It's a mutually beneficial arrangement, and one that stands in sharp contrast to a forklift mentality.
The next time you consider an IT vendor consider how they deliver innovative features. With a hand outstretched in partnership or reaching for your wallet.
02 April 2010
The issue is how to accomplish this with fewer available resources. To do this you have to get creative, and adversity catalyzes the process. It is the gap between available resources and demand that drives innovation, creativity, and opportunity.
In the words of J.C. Maxwell, “adversity motivates.” Maxwell’s "Benefits of Adversity" identifies the positive attributes of adversity:
1. Adversity creates resilience;
2. Adversity develops maturity;
3. Adversity pushes the envelope of accepted performance;
4. Adversity provides greater opportunities;
5. Adversity prompts innovation;
6. Adversity brings unexpected benefits;
7. Adversity motivates.
The present downturn is no exception. IT managers face budget and headcount cuts, yet the companies for which they work cannot stop running. Leveraging investments in existing infrastructure, minimizing major new capital investments, and recouping savings from company operations are the new marching orders. If satisfying existing needs was good enough then the task at hand would be straightforward – weather the adverse economic climate by cutting as much spending and headcount as possible.
But in business it isn't that simple. The end of any downturn is followed by an uptick that will require increased IT services. Cut too far today and IT won’t be able to respond tomorrow. Business will suffer - again. IT managers must therefore be cognizant of the future and look at changes and cuts with an eye towards their impact on a future recovery.
This raises the question – is it possible to batten down the hatches to survive the current economic storm while laying the foundation for a future recovery? The answer is yes...but the challenge to doing so, surprisingly, is neither technological nor monetary but conceptual.
Doing more with less requires a new way of thinking about problems. In the IT world it means reconsidering the value of overbuilding complex, expensive infrastructure. In this market, in this economy, the first priorities need to be streamlining costs, boosting productivity, and enhancing efficiency.
A simple example will drive home the point. To lower costs, most enterprises are reducing their real estate footprints. Today 88% of employees work somewhere other than the corporate headquarters - many hotel in branch offices, work from home, or work on the road. The traditional way in which these remote users would be served is with a branch router. This paradigm might be acceptable for a large office but it's outrageously expensive for a branch of just a few people.
The challenge is how to network a large and growing remote workforce in an environment focused on cost reduction. It is here that adversity catalyzes innovation. By standing the problem on its head and saying the real issue is how we enable mobility at low cost for a large number of users - not how we connect a branch office - new, non-traditional solutions emerge.
To a router vendor every problem ends with a hardware-based solution - it is the proverbial key under the streetlight. Reconstituting the problem expands the area of illumination, revealing, for instance, that cloud-computing and virtualization are new options not previously considered.
Simply reframing a question can open a completely new set of solutions. Adversity forces the process by highlighting the inadequacy of the “old school” way of thinking and opening the door to innovative new solutions. Ones that focus on today's needs instead of yesterday's answers.
01 April 2010
In 1979 The Buggles released their debut single, 'Video Killed The Radio Star,' a nostalgic look at radio from the perspective of the video age that killed it.
Progress drives on, looking nostalgically in the rear view mirror from time to time, but propelled forward by the engine of our insatiable desire for something better.
Tube-based table radios are nostalgic. So are rotary phones, wooden plows, and ironclad ships. Doesn't mean we want to use them anymore. They were abandoned because something better came along. Something easier to use. Faster. Less expensive.
Technology transitions happen all the time in enterprise IT, but the branch office and fixed teleworker seem to have been neglected along the way. And what an oversight it was. Today more than 85% of employees work outside of the primary corporate campus. Yet they need - but haven't had - the same access to corporate network resources and applications as someone in the home office.
The solution cobbled together by router vendors was to remotely replicate the infrastructure that's on the corporate campus. That is, assemble a stack of appliances for security, VPN, Wi-Fi, routing - and then try to integrate them to work together.
Over time the separate appliances morphed into an integrated branch-in-a-box router. But experience showed that while you can morph a hairball into a router, you can never take the hairball out of the router. From the user's point of view, the solution was little improved.
The fundamental problem is that the campus network and its branch offspring were designed assuming static users sitting behind protective firewalls. Mobility - mobile users specifically - breaks that model. You have to punch holes in firewalls, configure complex VLAN assignments for segmenting traffic and user types, install VPNs to protect roaming users. The list goes on and on. And grows more expensive, complex, and user unfriendly as it does.
Virtual Branch Networking (VBN) 1.0 was introduced in 2009 as a ground-up, mobility-focused solution. VBN made it simpler and less expensive to securely connect remote users to the enterprise network without changing the user experience.
VBN 2.0 goes one giant step farther by leveraging cloud services to do the job done by branch routers today - application acceleration, content security, remote access. Only it does so using a lower cost, more scalable solution that delivers a consistent user experience regardless of where you work: in the corporate HQ, in a branch office, from home, or on the road.
The cloud provides a massively scalable, economical way of delivering services and applications. It has changed the way we transfer data, download files, and use applications. When applied to branch networks, cloud services are the perfect tonic. They deliver essential business-critical services, without complexity, to widely distributed users at less than half the cost of the branch in-a-box router. This is one change you'll make and never, ever look back.
In my mind and in my branch,
We can't rewind, it bought the ranch,
VBN killed the branch-in-a-box.
Read more about VBN 2.0 on-line.