27 July 2010

Why SCADA Networks Are Vulnerable To Attack - Part 4: Controlling What You Use

Security doesn’t happen by accident – it must be built into or added to a network. Some of the key security building blocks for wired and wireless networks include encryption, authentication, intrusion detection, controlled access to network resources, and wireless airtime and bandwidth control.

Sensor and control networks are typically missing most of these building blocks. Designed to optimize response time, they use short packets that cannot easily accommodate the larger packet sizes associated with high-security encryption.

Some controls networks, LONWORKS® for example, include an authentication mechanism, but in practice it is infrequently implemented because its use complicates key management in multi-vendor networks. Intrusion detection, for wired or wireless control networks, is typically not available, nor is firewalling or endpoint compliance – certainly not at the sensor/actuator level, and sometimes not even at the controller level.

Quick fixes to address these limitations are not easily incorporated because the protocols employed are often embedded inside microprocessors that lack the processing power and memory to support the necessary security algorithms, buffers, and certificates.

Fortunately most control networks today interface with an IP-based network for management, monitoring, and/or control. And it is at this interface that you can click the ruby slippers and apply proven security techniques like policy-enforcement firewalling to
prevent the control network from launching Denial-of-Service (DoS) attacks or non-compliant devices from accessing the network.

If the control network is IP-based, then the protective measures can be applied to the control devices themselves. If not, protection can only be applied to data traversing the interface between the sensor/actuator network and the IT systems to which it is connected, i.e., the latter can be protected against the former. Either way, security will be greater than if no protective measures were applied between the control devices and the network to which they are connected.
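To make the idea concrete, here is a minimal sketch of the kind of policy a boundary firewall between a control network and the IT systems behind it could enforce. Everything in it, the addresses, the management-server rule, and the rate limit, is invented for illustration and not taken from any real product:

```python
from collections import defaultdict

# Hypothetical boundary policy: only whitelisted control devices may
# reach the management server, and each device is rate-limited so a
# runaway (DoS-like) traffic source is contained at the interface.
ALLOWED_DEVICES = {"10.1.1.10", "10.1.1.11"}   # known sensors/controllers
MANAGEMENT_SERVER = "192.168.0.5"              # the only permitted destination
MAX_PACKETS_PER_WINDOW = 100                   # per device, per time window

counts = defaultdict(int)

def permit(src_ip: str, dst_ip: str) -> bool:
    """Return True if a packet may cross into the IT network."""
    if src_ip not in ALLOWED_DEVICES:
        return False                 # non-compliant device: drop
    if dst_ip != MANAGEMENT_SERVER:
        return False                 # control traffic may only reach its server
    counts[src_ip] += 1
    return counts[src_ip] <= MAX_PACKETS_PER_WINDOW  # throttle floods
```

A real policy-enforcement firewall does this statefully and per-flow, but the deny-by-default shape is the same.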

The range of available security features that may be applied depends on the control network architecture, and includes:


The protective measures afforded by these techniques can be applied prophylactically to reduce some or most of the control system’s vulnerabilities.

With regard to cost, if Wi-Fi based sensors and actuators are used, the protective measures built into the wireless LAN infrastructure can be applied at little or no additional expense. If IP-based sensors and actuators are used, there will be some incremental expense but the devices themselves will not have to replaced because they already have the essential building blocks for higher security in place. If a non-IP based control network is used then the benefits will vary.

The table below summarizes how the security features described above can be employed to enhance the security of commonly used control networks (features specific to wireless networks are left blank when applied to wired control networks).

Conclusion

SCADA, smart grid, and energy management systems sit at the heart of industry and commerce.
This blog series was intended to highlight that defending these systems against attack must become a high priority because you can't use what you can’t control.

The control networks on which these systems depend today have unintended vulnerabilities.
These vulnerabilities can be corrected in whole, part, or not at all depending on the architecture and technology of the underlying network.

Consideration should be given to retrofitting security systems into existing IT infrastructure to address security concerns, removing control networks for which there are no corrective measures, and ensuring that any new control-related infrastructure is designed with protective measures built-in from the outset.

For more information on security solutions that you can apply today please visit Aruba's Web site.

Why SCADA Networks Are Vulnerable To Attack - Part 3: Firewall Both Users AND Devices


Following a rise in the theft of payment card data, the Payment Card Industry (PCI) standards council was created by the top card brands to combat such crime. The resulting PCI Data Security Standard (DSS) defines mandatory security guidelines for use by all merchants and service providers that store, process and transmit cardholder data.

Wireless LAN security is a core component of these requirements. DSS v1.1 permitted the use of WEP encryption. Indeed, many retailers wanted to continue using the WEP devices they had already purchased, not because of the encryption scheme but to avoid the capital outlays required to replace WEP devices with higher security equivalents.

While WEP encryption is easily cracked, and was subsequently banned under DSS v1.2, an ingenious method was used to protect WEP devices so they could continue in service until DSS v1.2 was implemented. This solution protected the network without requiring any changes to, or client software on, the WEP devices. The same approach holds great promise for the protection of SCADA, smart grid, and energy control systems.


Consider the humble bar code scanner. A workhorse of both point-of-sale (POS) and logistics systems, many scanners in use today rely on 802.11b/g Wi-Fi and WEP. Data from the scanners are passed via Wi-Fi to the enterprise network. If you crack WEP you therefore potentially open a back door into that network.

Integrating a stateful, role-based policy enforcement firewall into the wireless network slams shut this back door. By identifying devices not by the port through which they entered the network but by the user and/or type of device, the firewall can blacklist unauthorized devices and deny them access to the rest of the network.

The firewall can distinguish between multiple classes of users, allowing one common network infrastructure to function as independent networks whose isolation is ensured by policy enforcement. Guest access is separate from POS which is separate from logistics, etc.
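In pseudo-policy form, the segmentation looks something like the sketch below. The roles and segment names are made up for illustration; a real deployment would map them to VLANs and firewall rules:

```python
# Hypothetical role-to-segment policy of the kind a role-based
# firewall enforces. Role and segment names are invented examples.
ROLE_POLICY = {
    "guest":     {"internet"},
    "pos":       {"payment-gateway", "internet"},
    "logistics": {"inventory-db"},
}

def allowed(role: str, destination: str) -> bool:
    """Deny by default: a device whose traffic matches no provisioned
    role (say, a scanner reached through cracked WEP) reaches nothing."""
    return destination in ROLE_POLICY.get(role, set())
```

One common infrastructure, three networks that never touch: that is the isolation by policy enforcement described above.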


The elegance of this approach is that it can be retrofitted to existing networks - wired and wireless - using a true overlay model, without any software clients or other changes to the devices being protected. It protects any device from any manufacturer.

This same segmentation and policy enforcement scheme can be applied to wired and wireless sensors as soon as their data hit the IT infrastructure. Access rights, quality-of-service, bandwidth, VLANs – almost any parameter can be controlled and actively managed by the stateful, role-based policy enforcement firewall. It is to the benefits of this approach, used in conjunction with additional security enhancements, that we’ll turn in the next posting.

Why SCADA Networks Are Vulnerable To Attack - Part 2: The Weakest Link

In the beginning, there was cabling - lots of cabling. Every sensor, actuator, and display was connected by a separate cable that grew like a hydra from a controller, the brains of a traditional control system. If a solenoid needed to be triggered in response to the activation of a limit switch then the signal traveled from the limit switch, through cabling to the controller, which processed the information and sent a command to the solenoid over yet another cable.

These direct wired systems were subsequently replaced with time- or frequency-division multiplex systems that allowed one common cable to be shared among multiple devices. Installation was simpler and less expensive, but the controller was more complex and remained, as before, a central point of failure should its program fail to execute properly.


Next up were intelligent, distributed networks in which devices communicated directly with one another on a peer-to-peer basis, without the need for a central controller. Locally intelligent and able to communicate over a shared medium with any other device, these networks allowed system functionality to be reconfigured via software download over the network. Peer-to-peer communications allowed the direct exchange of information between any or all of the devices without intervention by a central device, eliminating the single point of failure.


Regardless of the specific architecture used, in all cases the objective of the control network was to deliver status information as quickly as possible to all devices that needed updates. The protocols were highly optimized for short control packets, and nary a bit was “wasted” on ancillary data or status.


The same optimization guidelines applied to the microcontrollers running the devices. To keep costs down, and thereby allow the networks to be pervasively deployed down to the lowest-cost sensor/actuator, processors were optimized for processing short packets at high throughput.


The popularity of IP connectivity spawned the development of IP-based control networks in which Ethernet or Wi-Fi forms a backbone for linking different sections of a control network. While controllers were the first devices to sit on an IP network, increasing numbers of native IP sensors and actuators are reaching the market.


Many IT departments prohibit the connection of any IP-based, control-related sensor/actuator, controller, or gateway to their corporate networks out of concerns about network integrity and security. IT managers are legitimately concerned that the high offered traffic of control networks, some of which run at 100% channel utilization, will overwhelm their Ethernet networks and cause unintentional denial of service. Others are concerned that control networks, for which security is rarely a high priority, could become unprotected back doors into the corporate network.


What is rarely if ever discussed is how exposed the enterprise is to unauthorized manipulation of the control devices themselves. These systems control the power at the heart of every business and institution, and it is paramount that they be protected against unauthorized manipulation. It is to this point that we’ll return in the next installment of this series.

Why SCADA Networks Are Vulnerable To Attack - Part 1: Unintended Consequences

This multi-part series discusses the security vulnerabilities of the sensor/actuator controls at the heart of SCADA, smart grid and energy management systems, and proposes a means of containing, if not fully addressing, the limitations of these systems.

* * * * * * *

In the 1980s the proximity access card was introduced to the building security market. Until that time, gaining access to high security facilities – including many government agencies – required one to physically insert a magnetic stripe or Wiegand card into a reader.

Proximity card readers from Schlage, Sielox, Indala, and others overcame the inconvenience of swiping a card by using radio energy to sweep the area in front of the reader.
Users needed only to place their wallet, purse, valise, or ID badge near a reader and the radio energy would be picked up by their proximity card.

A tuned circuit internal to the card would resonate when within range of the reader, generating a unique radio signature that would be captured and analyzed by the access control system. If the signature matched that of a valid card already programmed into the system, access would be granted. Simple, elegant, and convenient, proximity card systems quickly grew in popularity.


Problem was, this innovative technology had profound, unintended consequences. It allowed the surreptitious identification of people with access privileges to high security facilities. One could use radio energy to sweep a crowd and pick out persons of interest based on the signatures generated by their proximity cards. At a time when the Cold War was steamy hot and espionage was rampant, the proximity card was a new-found tool for adversaries.


The unintended consequences of a new technology are not usually discovered until after it's in use, sometimes widespread use, by which time available remediation options might be limited or very expensive. Such is the case with SCADA, smart grid, and energy management systems, which are now front and center in the effort to better manage energy consumption and lower greenhouse gases. Unintentionally vulnerable to manipulation and unauthorized access, these systems can literally turn out the lights, stopping a utility or enterprise cold in its tracks.

(Photo: www.brightsecuritygroup.com)

15 July 2010

Is there a role for Wi-Fi in offloading traffic from cellular networks?

We are today witnessing a mobile device boom driven by distributed workforces that need secure anywhere-connectivity, and consumers who want always-on Internet access. Smartphone sales grew 29% year-over-year in 2009 to surpass notebook sales (1), and dual-mode (Wi-Fi/cellular) phones and smartphones will more than double from 2008-2013 to 130.9 million units (2).

One consequence of the flood of mobile devices is growing congestion on cellular data networks. Slow and dropped network connections are legion in large metropolitan areas like Beijing, New York, and San Francisco. Cellular data traffic is rising beyond sustainable network capacity, and there are no signs that it will abate any time soon.


This problem is compounded by the challenge carriers face in obtaining acceptable ROI from their massive infrastructure investments. Value-added services like video help a carrier’s bottom line, but the more bandwidth-hungry video traffic booms, the more capacity is squeezed. Sticky new services and applications needed to secure customer loyalty only add to bandwidth woes.


One solution is to offload bandwidth-intensive multimedia traffic to nearby Wi-Fi networks, a process called “cellular offload.” In theory pushing traffic from overcrowded cellular networks onto high capacity, high-speed Wi-Fi networks should alleviate network congestion. The challenge for carriers is ensuring that bandwidth relief doesn’t come at the expense of the customer experience…or at the customer’s expense.


Cellular offload must be simple to initiate, the quality of service on Wi-Fi must be equal to or better than that offered on cellular, and there should be no cost penalties to the user. That’s a tall order. Many a manufacturer of metropolitan mesh Wi-Fi networks has attempted cellular offload and failed.


Why? Because metro mesh networks were designed for e-mail and Web access, and not high-density, latency-sensitive data, voice, and video applications. Mesh technology is available that can handle these types of applications, Azalea Networks being a noted example, but metro mesh vendors have so fouled the market that customer resistance is high though not insurmountable.


Cost penalties are another concern. Some carriers, AT&T among them, are trying to convince subscribers to pay twice for cellular offloading – once for cellular data service and once for a home Wi-Fi access point to handle traffic that the cellular network can’t. Even if the economics did work for a consumer, this stop-gap crumbles the moment users set foot outside their homes. A system-wide solution – not an ad hoc one – is the only way to address the dilemma.


A corollary to Parkinson’s Law says that data expands to fill all available bandwidth. So while some pundits say we’ll obtain bandwidth relief from 4G cellular (most studies say otherwise), those networks will attract applications that are even more bandwidth heavy.

What we need is a commuter lane to handle network overspill and ensure that essential and urgent cellular traffic has the bandwidth it needs. Wi-Fi networks can be that path, if constructed correctly and with the right building blocks, and can do so at a price affordable enough to implement on a vast scale.


So let's stop blaming the rising popularity of Web-enabled smartphones and start focusing on using Wi-Fi to solve the problem.


(1) Dataquest Insight: PC Vendors' Move Into the Smartphone Market is Not Challenge Free

(2) Dataquest Insight: Factors Driving the Worldwide Enterprise Wireless LAN Market, 2005-2013


29 April 2010

Project "CleanWallet": The Newest Way To Separate Wi-Fi Customers From Their Money

The best pickpockets create a diversion before they dip and run. They'll bump into you, drop an object nearby, or yell something to catch your attention.

Distracted by the commotion, the extraction proceeds unnoticed. That is until you next reach for your money only to find it's gone missing. Never to be seen again.

This week at Interop Cisco created such a diversion when it announced the availability of a new hardware-based spectrum analyzer. With features remarkably similar to Aruba's recently announced software-based spectrum analyzer - and using words so closely paired to Aruba's that a plagiarist would swoon - Cisco proclaimed that the world at last had a solution for dirty air. The secret: a new line of access points containing - drum roll, please - an embedded ASIC. Did that get your attention?

Now for the dip. In order to get this feature you have to replace your existing access points. If you want clean air everywhere then you have to replace all of the access points in your network. Every single one.
Brilliant!

You've got to give credit where credit is due. Project "CleanWallet" is really a double-dip - once for new APs and once for the 802.11n APs you only just purchased.
Even the Artful Dodger would be impressed.

Silly sods, us. Instead of forcing customers to divvy up cash to replace their access points, our new software-based spectrum analyzer works with all Aruba 802.11n access points, including those already installed.
Aruba's spectrum analyzer is feature rich, and includes Fast Fourier Analysis, spectrograms, interference classification, and programmable recording/playback.

We don't require any new hardware to make spectrum analysis work, and for customers using our Wireless Intrusion Prevention Module the feature comes for free. Aruba's 802.11n access points are already significantly less expensive than Cisco's, so the entire Wi-Fi system, including spectrum analysis, is easy on your wallet.

If Project "CleanWallet" isn't your thing, give us a call. We'll prove that
you don't have to pay through the nose or sacrifice features to get clean air.

18 April 2010

Innovation Shouldn't Have To Be Delivered By Forklift

Ever notice how the latest and greatest innovation from some vendors invariably requires replacing the equipment you've already installed? Known as a "forklift" upgrade, these swap-outs benefit the vendor at the expense of the customer's time and money.

Let's face it, forklift upgrades are driven by vendor greed. The worst offenders make no apologies for their inability and/or unwillingness to design upgradable products. It's just not in their DNA.
Product design recapitulates corporate philosophy, to paraphrase Haeckel.

There are existence proofs that a forklift is not a mandatory prerequisite for obtaining a new feature - even one incorporating a profoundly complex new technology. Therefore a forklift-based strategy must originate in a forklift-oriented mentality.


Case in point - spectrum analysis.


Wi-Fi networks operate in environments containing electrical and radio frequency devices that can interfere with network communications. 2.4 GHz cordless phones, microwave ovens, wireless telemetry systems, and even adjacent Wi-Fi networks are all potential sources of interference. Interference sources can be either continuous or intermittent, the latter being the most difficult to isolate.

The task of identifying interference typically falls to a spectrum analyzer, the gold standard for isolating RF impediments. Spectrum analyzers help isolate packet transmission issues, over-the-air quality of service problems, and traffic congestion caused by contention with other devices operating in the same channel or band. They are an essential tool to ensure that networks run as they should.

To be effective the analyzer needs to be in the right place at the right time. The ideal solution is a spectrum analyzer that’s built into the wireless LAN infrastructure, and can examine the spectral composition of the RF environment anywhere in the Wi-Fi network, at any time. Today vendors offer handheld spectrum analyzers as well as ones that require the addition of spectrum analysis monitors (effectively doubling the total number of access points on site for full coverage).

Rumors are that at least one vendor will be offering new access points with integrated spectrum analysis. Consistent with their company policy, however, a forklift upgrade will be required to use it.

Aruba has taken a completely different tack with spectrum analysis. Its recently introduced scientific-grade spectrum analyzer includes traditional tools such as Fast Fourier Transform (FFT), spectrograms, and interference source classification. It also includes powerful new features such as interference charts, channel quality measurement, and spectrum recording and playback.
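For the curious, the FFT at the heart of these tools is easy to demonstrate. The toy sketch below uses a naive DFT instead (same result mathematically, just slower, and dependency-free) on an invented waveform to pick out the dominant tone; every number in it is illustrative:

```python
import cmath
import math

# Toy spectrum scan: a made-up sampled waveform containing a strong
# 5 Hz tone and a weaker 20 Hz "interferer". A real analyzer would
# use an FFT over RF samples; the principle is identical.
fs = 100            # sample rate, Hz (illustrative)
n = 100             # one second of samples
samples = [math.sin(2 * math.pi * 5 * k / fs)
           + 0.4 * math.sin(2 * math.pi * 20 * k / fs)
           for k in range(n)]

def magnitude(freq_bin: int) -> float:
    """Magnitude of one DFT bin (freq_bin cycles per capture window)."""
    return abs(sum(samples[k] * cmath.exp(-2j * math.pi * freq_bin * k / n)
                   for k in range(n)))

# Scan bins 1..n//2-1 and report the dominant frequency in Hz.
peak_bin = max(range(1, n // 2), key=magnitude)
peak_hz = peak_bin * fs / n   # bin spacing is fs/n = 1 Hz here
```

Classification, spectrograms, and recording are layered on top of exactly this kind of per-bin magnitude data.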

Uniquely, the new spectrum analyzer works with all Aruba 802.11n access points, including those already in service. That is, a customer with an existing Aruba 802.11n deployment can enable spectrum analysis on any of their existing access points without adding any new hardware. None.

And the cost? Zero if you are already using Aruba's Wireless Intrusion Protection (WIPS) Module into which the new analyzer is integrated.

Why does Aruba introduce new features that expand the capabilities of its customers' already deployed networks? Why did it add distributed forwarding without a controller in the data path? E9-1-1 call positioning? Wired switch management?

Because adding features recapitulates our corporate commitment to value, driving growth by enhancing the utility of our customers' investments. It's a mutually beneficial arrangement, and one that stands in sharp contrast to a forklift mentality.

The next time you consider an IT vendor consider how they deliver innovative features. With a hand outstretched in partnership or reaching for your wallet.

02 April 2010

Adversity Drives Innovation

Economic downturns are commonly viewed as a time of retrenching and cut-backs, but they're also times of intellectual ferment and innovation. While budget cuts and scaled back programs create adversity, there remains a job to do and customers to satisfy.

The issue is how to accomplish this with fewer available resources.
To do this you have to get creative, and adversity catalyzes the process. It is the gap between available resources and demand that drives innovation, creativity, and opportunity.

In the words of J.C. Maxwell, “adversity motivates.” Maxwell’s "Benefits of Adversity" identifies the positive attributes of adversity:

1. Adversity creates resilience;

2. Adversity develops maturity;

3. Adversity pushes the envelope of accepted performance;

4. Adversity provides greater opportunities;

5. Adversity prompts innovation;

6. Adversity reaps unexpected benefits;

7. Adversity motivates.


The present downturn is no exception. IT managers face budget and headcount cuts, yet the companies for which they work cannot stop running. Leveraging investments in existing infrastructure, minimizing major new capital investments, and recouping savings from company operations are the new marching orders. If satisfying existing needs was good enough then the task at hand would be straightforward – weather the adverse economic climate by cutting as much spending and headcount as possible.


But in business it isn't that simple. The end of any downturn is followed by an uptick that will require increased IT services. Cut too far today and IT won’t be able to respond tomorrow. Business will suffer - again. IT managers must therefore be cognizant of the future and look at changes and cuts with an eye towards their impact on a future recovery.


This begs the question – is it possible to batten down the hatches to survive the current economic storm while laying the foundation for a future recovery? The answer is yes...but the challenge to doing so, surprisingly, is neither technological nor monetary but conceptual.


Doing more with less requires a new way of thinking about problems. In the IT world it means reconsidering the value of overbuilding complex, expensive infrastructure. In this market, in this economy, the first priorities need to be streamlining costs, boosting productivity, and enhancing efficiency.


A simple example will drive home the point. To lower costs, most enterprises are reducing their real estate footprints. Today 88% of employees work somewhere other than the corporate headquarters - many hotel in branch offices, work from home, or work on the road. The traditional way in which these remote users would be served is with a branch router. This paradigm might be acceptable for a large office but it's outrageously expensive for a branch of just a few people.

The challenge is how to network a large and growing remote workforce in an environment focused on cost reduction. It is here that adversity catalyzes innovation. By standing the problem on its head and saying the real issue is how we enable mobility at low cost for a large number of users - not how we connect a branch office - new, non-traditional solutions emerge.


To a router vendor every problem ends with a hardware-based solution - it is the proverbial key under the streetlight.
Reconstituting the problem expands the area of illumination, revealing, for instance, that cloud-computing and virtualization are new options not previously considered.

Simply reframing a question can open a completely new set of solutions. Adversity forces the process by highlighting the inadequacy of
the “old school” way of thinking and opening the door to innovative new solutions. Ones that focus on today's needs instead of yesterday's answers.

01 April 2010

VBN Killed The Branch-In-A-Box


In 1979 The Buggles released their debut single, 'Video Killed The Radio Star,' a nostalgic look at radio from the perspective of the video age that killed it.

Progress drives on, looking nostalgically in the rear view mirror from time to time, but propelled forward by the engine of our insatiable desire for something better.

Tube-based table radios are nostalgic. So are rotary phones, wooden plows, and ironclad ships. Doesn't mean we want to use them anymore. They were abandoned because something better came along. Something easier to use. Faster. Less expensive.

Technology transitions happen all the time in enterprise IT, but the branch office and fixed teleworker seem to have been neglected along the way. And what an oversight it was. Today more than 85% of employees work outside of the primary corporate campus. Yet they need - but haven't had - the same access to corporate network resources and applications as someone in the home office.


The solution cobbled together by router vendors was to remotely replicate the infrastructure that's on the corporate campus. That is, assemble a stack of appliances for security, VPN, Wi-Fi, routing - and then try to integrate them to work together.


Over time the separate appliances morphed into an integrated branch-in-a-box router. But experience showed that while you can morph a router from a hairball, you can never take the hairball out of the router. From the user's point of view, the solution was little improved.

The fundamental problem is that the campus network and its branch offspring were designed assuming static users sitting behind protective firewalls. Mobility - mobile users specifically - breaks that model. You have to punch holes in firewalls, configure complex VLAN assignments for segmenting traffic and user types, install VPNs to protect roaming users. The list goes on and on. And grows more expensive, complex, and user unfriendly as it does.


Virtual Branch Networking (VBN) 1.0 was introduced in 2009 as a ground-up, mobility-focused solution. VBN made it simpler and less expensive to securely connect remote users with the enterprise network, without changing the user experience.


VBN 2.0 goes one giant step farther by leveraging cloud services to do the job done by branch routers today - application acceleration, content security, remote access. Only it does so using a lower cost, more scalable solution that delivers a consistent user experience regardless of where you work: in the corporate HQ, in a branch office, from home, or on the road.


The cloud provides a massively scalable, economical way of delivering services and applications. It has changed the way we transfer data, download files, and use applications. When applied to branch networks, cloud services are the perfect tonic. They deliver essential business-critical services, without complexity, to widely distributed users at less than half the cost of the branch in-a-box router. This is one change you'll make and never, ever look back.


In my mind and in my branch,

We can't rewind, it bought the ranch,

VBN killed the branch-in-a-box.


Read more about VBN 2.0 on-line.

07 March 2010

The Lessons of Wi-Fi #14: Wi-Fi Should Save Money, Not Waste It

The computer science graduate students shuffle into class, taking their assigned seats. The professor opens the lesson by asking if there are any questions about the assigned reading.

A student raises her hand and asks, "We live in such a complex world. How could it possibly have been created in just 7 days?" Without a moment's hesitation the professor looks up and responds, "Because there was no installed base – it was a new deployment."

Retrofitting 802.11n Wi-Fi to an existing network requires consideration of a number of factors: switch capacity, cable length, cable capacity, power sources. The last item is especially important during the transition to 802.11n. Many 802.11n access points far exceed the current capability of existing
802.3af Power-over-Ethernet (PoE) sources. Some require an astounding 32 Watts or more, far beyond the capabilities of 802.3af.

Unless you read the fine print in product data sheets you could find yourself exceeding the power delivery capabilities of both power sources and a single Ethernet cable. A Wi-Fi network that was supposed to reduce the cost of IT infrastructure by doing away with unneeded wired ports and switches could instead result in a whopping big bill to replace PoE infrastructure.
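The arithmetic is worth spelling out. In the back-of-the-envelope check below, the per-standard budgets are the IEEE figures (12.95 W delivered to the powered device under 802.3af, 25.5 W under the later 802.3at "PoE+"); the access point draws you test against them are whatever your data sheets say, and the ones in the test are hypothetical:

```python
# Power available at the powered device (PD) after worst-case loss
# over 100 m of cable, per the IEEE PoE standards.
PD_BUDGET_W = {
    "802.3af": 12.95,   # PSE sources up to 15.4 W; PD is guaranteed 12.95 W
    "802.3at": 25.50,   # "PoE+": up to 30 W sourced, 25.5 W at the PD
}

def fits_budget(ap_draw_w: float, standard: str) -> bool:
    """True if an access point drawing ap_draw_w watts can run fully
    powered over a single cable from a standard PoE source."""
    return ap_draw_w <= PD_BUDGET_W[standard]
```

A 32 W access point fails even the 802.3at check, which is exactly when mid-span injectors and their costs appear.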

The Lessons of Wi-Fi #14: a Wi-Fi network should save money, not waste it. If you have to add supplemental power injectors, especially mid-span power sources, labor and hardware costs will soar. Power-hungry access points and high-current injectors also generate a lot of heat, so you'll incur higher recurring cooling costs. And your carbon footprint will grow.

Aruba's 802.11n access points operate from 802.3af power sources. Always have. In fact, we were the first company to introduce an 802.3af powered 3x3 MIMO access point. The access points also feature a lifetime warranty because the company stands behind what it builds.

As you consider an upgrade to 802.11n, be certain that 802.3af delivers sufficient current to power all of the radios to their full operating mode in every access point. If the data sheet says you need something other than a single 802.3af supply operating over 100m of cable to get full performance, consider yourself warned.

So check out our range of
802.11n access points and leave it to someone else to relearn the lessons of Wi-Fi.

06 March 2010

The Lessons of Wi-Fi #12: Your Wi-Fi Network Should Not Be A One Trick Pony

Time was when you left work at the office. Those days are long gone. Enterprises and institutions with workforces, offices, or colleagues spread across time zones often have time- and location-shifted working conditions.

Users might need to work from home, on the road, or at a remote site. In all cases, a user will be most productive if the network experience - and access to applications and network resources - is the same remotely as it is at his or her desk at work.


Can the wireless LAN infrastructure that's used in a campus environment pull double duty and be used by remote users, too? The stock answer from most vendors is "never the twain shall meet" - use a campus wireless LAN at work and a remote access solution like a virtual private network (VPN) everywhere else.

Since using a VPN is very different from accessing a campus network, users need to be trained how and when to use the appropriate access method. And that means Help Desk calls. The end user is stuck with two parallel, non-intersecting networks to buy, maintain, and learn. Ouch!

The Lessons of Wi-Fi #12: your Wi-Fi network should not be a one-trick pony. One common network infrastructure should support both the campus wireless LAN and off-site users.
And it should provide an identical end user experience regardless of how or where the network is accessed.

Enter Aruba's Virtual Branch Networking (VBN) technology. VBN uses low-cost Remote Access Points (RAPs) to securely connect remote users, and their Wi-Fi and wired Ethernet devices, back to a controller in the data center - the same controller that runs the campus Wi-Fi network.

Any standard Aruba indoor access point can be used as a RAP. That means one SKU can serve as either a campus AP or a Wi-Fi-enabled remote access device for a home, branch office, or road warrior.

The $99 list price RAP-2 unit pictured here is small enough to fit in a shirt pocket or valise. It works with any IP-based device - laptops, iPhones, iPod touches, PCs, printers, wired and wireless Voice over IP phones, wireless projectors - all of which can simultaneously share a single RAP. As can multiple users.

VBN features one-button installation so that a non-technical person can provision a RAP-2 without help. No IT assistance, no user training required. Once commissioned, the user just turns on his or her MacBook, PC, or iPod touch and is instantly connected to the network...just as on campus.

Data encryption and an integrated firewall provide comprehensive network security for all RAPs, while centralized management ensures speedy diagnostics and updates right over the network.


You don't have to suffer a double budget hit to get best-in-class campus Wi-Fi and secure remote access. So check out VBN and leave it to someone else to relearn the lessons of Wi-Fi.

04 March 2010

The Lessons of Wi-Fi #11: Aesthetics Matter

If you walk around most any IT trade show, a harsh reality sinks in. While a lot of engineering goes into hardware and software design, spending is often miserly when it comes to packaging design.

Consumer companies hire world-class designers - or design firms like IDEO - to create products with rakish, timeless good looks. The resulting products fit well in virtually any decor.


Step into the enterprise market and things change. Evidently many enterprise vendors believe that function trumps form. Make a product function well and no one will care that it was hit with the ugly stick. Even if the products are intended for open display - on ceilings in Board Rooms, classrooms, branch offices.


The Lessons of Wi-Fi #11: aesthetics matter. Businesses and institutions spend fortunes, large and small, with architects and interior designers to ensure that their facilities are attractive. Every component that goes into a building - from fire sprinkler heads to smoke detectors to wiring devices - must pass muster. How could any IT vendor believe that the very same aesthetic standards don't also apply to IT gear? Especially publicly visible devices like Wi-Fi access points.

Visit an IT trade show and you'll see shoe-box sized APs, bristling with dark, leg-like antennas. And squat APs, disk-shaped like the calling card of a digital elephant. And bulbous APs shaped like a knight's helmet.

In the landscape of the ceiling, camouflage is paramount: a diminutive, sleek design with neutral colors and a shape that matches other ceiling fixtures fits in best.

At Aruba we use world-class packaging designers to help our indoor access points blend into their surroundings. Our AP-105 Access Point is the smallest enterprise-class 802.11n AP on the market, and blends neutrally into any public environment. While its stellar performance calls attention to the product, its packaging does not.

You don't have to compromise aesthetics to get best-in-class Wi-Fi. So check out the AP-105, and leave it to someone else to relearn the lessons of Wi-Fi.

03 March 2010

The Lessons of Wi-Fi #10: A Bad Tool Will Never Find A Good Network

You need a new Wi-Fi network for your school. The legacy system is a patchwork of consumer Wi-Fi gear and just can't handle your multi-media, throughput, and security requirements. Moreover the old network is a bear to manage because it doesn't provide any diagnostic information about the cause of increasingly frequent network outages.

One of the vendors you call in gives you a nifty sales pitch about their newfangled access points and even throws in a free network survey. When you ask about network management the sales person says they have a system that automatically discovers, configures, and monitors the whole wireless network, and can scale from single sites to cover the whole school district.

"But what if a problem originates in the wired network or in a mobile device? Or I want to manage the wired switches? How do I handle those scenarios?" you ask. All you draw in return is a blank stare.

The Lessons of Wi-Fi #10: to paraphrase a late 13th-century French proverb, mauvés hostill ne trovera ja bon network - a bad tool will never find a good network. Network management is really about optimizing operations management - about keeping a network running 99.9999% of the time. Configuration and monitoring are only small pieces of the work that needs to be done.
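"Six nines" sounds abstract until you convert it into downtime. A quick sketch of the arithmetic:

```python
# Convert an availability percentage into allowed downtime per year.
# 99.9999% ("six nines") leaves only about half a minute of outage a year.
SECONDS_PER_YEAR = 365 * 24 * 3600   # 31,536,000

def downtime_seconds(availability_pct):
    """Seconds of outage per year implied by an availability percentage."""
    return SECONDS_PER_YEAR * (1 - availability_pct / 100)

for pct in (99.9, 99.99, 99.999, 99.9999):
    print(f"{pct}% uptime -> {downtime_seconds(pct):10.1f} s of downtime/year")
```

At 99.9999% uptime, the budget is roughly 31.5 seconds of outage per year - which is exactly why a management tool has to earn its keep in emergencies, not check-ups.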

Physicians train for hundreds and hundreds of hours to properly handle emergencies. Why? Because patients rarely die waiting for routine check-ups. It's in an emergency - when the stakes are high and time is very short - when they must prove their mettle. The same is true for network management tools.

Wireless networks don't work in isolation. Their operation depends on a wired core, closet switches, cabling, and the mobile devices with which they're associated. A fault could happen anywhere along this chain but "look" like it originated in the Wi-Fi network because that's where the problem first surfaced. A monitoring and diagnostics tool that only looks at the operation of the wireless network will stumble badly in this situation. And the consequence? Classes come to a halt, business stops, patients wait. Pretty bad.

Aruba's AirWave 7 tool is different. It's an operations solution that integrates the management of wireless networks, wired infrastructure, and client devices into a single interface. AirWave 7 provides a single point of visibility and control for the entire network edge, including wired and wireless infrastructure as well as individual client devices. In so doing, AirWave 7 reduces the cost and complexity of network management, while improving service quality for users.

A Mobile Device Management module gives IT managers control over mobile client devices from the same intuitive console they use to manage the network infrastructure. From a single console managers can supervise mobile devices, access points, controllers, and wired edge switches, including vital performance data, port utilization statistics and error data. By integrating monitoring of the wired and wireless infrastructure, the software facilitates faster and more accurate root-cause analysis.

And AirWave 7 is a multi-vendor tool. It works with Cisco and HP switches, among others, and supports wireless LANs made by more than 15 vendors, including Aruba, Cisco, HP, and Motorola. You're only out of luck if you own non-standard products or products from small niche vendors.

If you'd like to get the whole picture on network management you've only to visit the AirWave product site to see what real operations management can do for you. And leave it to someone else to relearn the lessons of Wi-Fi.

01 March 2010

The Lessons of Wi-Fi #9: Use Analysts & Audited Financials To Validate Vendor Claims


A loud-talking ranchman applies to a banker for a loan. The banker asks a neighbor if the rancher is a good credit risk. The neighbor ponders for a moment and then replies “Big hat, no cattle.” False bravado is funny when it’s the stuff of fiction, less so in real life – especially for customers snagged by rhetorical barbs.

And yet it happens again and again. Each year the networking world is introduced to “big hat” products with features and specifications so too-good-to-be-true that we let ourselves be reeled in. Why we don’t see through the shiny veneer and ask for proof of pedigree is a wonder. But it happens all the same.


The Lessons of Wi-Fi #9: use analysts and audited financials to validate vendor claims. Neutral, independent industry analysts like Burton Group, Canalys, Gartner, IDC, Infonetics, InfoTech, and Yankee Group can quickly assess vendors' technical claims.

And don't forget to check financials - audited financials - because you want your vendor to be in business should you need assistance or spare parts. If a vendor won't give up the numbers - or the numbers are substandard - then you have grounds for real concern.

A quick example will put the discussion in context. In 2008 a “big hat” four-radio 802.11n access point was announced that claimed to deliver 1.2 gigabits-per-second of aggregate capacity. The data sheet claimed that the four radios worked in tandem, enabling users to dramatically reduce the number of access points and additional security sensors, thereby reaping savings on cabling, connection and installation costs.


Still, the press ate it up. A flurry of articles expounded the virtues of delivering multiple HD streams to an entire building, with perfect coverage, at almost no cost. The world would soon be saturated with multi-radio APs, the unwashed masses blanketed with 802.11n. Wow, where do I sign up?


Fast forward to late 2009. The “big hat” super duper access point was no more. It simply vanished from the vendor’s Web site, its demise a secret. Was it ever built? No. But the company received undeserved publicity and that reeled in some unsuspecting customers.

To paraphrase Orson Welles, companies should herd no cattle before their time. Industry analysts can help you separate claims from reality. If an analyst says that a vendor can't execute well, refuses to divulge shipment numbers, and/or lacks technical vision - well, your due diligence is over.

The next time you see or hear about a product that appears to be too good to be true, separate the hats from the herds - kick the tires, test the features, validate the design. Those impressive features might be chimeras or, as with Aruba's AP-105 802.11n Access Point, the genuine article.

The Lessons of Wi-Fi #8: You Can Fund Your Wi-Fi Deployment By Rightsizing Your Wired LAN.


By any measure the California State University (CSU) system is enormous, encompassing 23 different campuses, nearly 450,000 students, and 48,000 faculty and staff.

Recently the university system was faced with a massive and potentially hugely expensive wired network refresh to upgrade infrastructure that was approaching the end of its service life. At the same time, the CSU system was experiencing a surge in the demand for network access across all of its campuses. In the absence of a budget for a Wi-Fi solution, which would have allowed one wired port to be simultaneously shared among many users, the IT staff was concerned that the need for Ethernet ports and switches would double.


What would you do in this circumstance? Expand the wired network? Seek additional funds for a wireless initiative? Restrict access to the network?


Those who forget the lessons of Wi-Fi are doomed to repeat them. Lesson #8: you can fund your Wi-Fi deployment by rightsizing your wired LAN.


Cisco suggested that the right solution was to expand the wired network with perhaps a smattering of wireless in lecture halls. Why? In a paper titled True-Sizing the Network, Cisco claims that Ethernet is future-proof, more secure, and more reliable than wireless networks. In fact it marginalizes Wi-Fi, relegating it to situations in which Ethernet cannot otherwise be used.

The twisted “true-sizing” message shortchanges end users because it fails to take into consideration changes in user preferences, market trends, and technology that have occurred in recent years:

  • iSuppli reports that shipments of laptops surpassed desktops (38.6M vs. 38.5M) in 3Q08;
  • Yankee Group estimates that enterprises with no Wi-Fi access will drop from 43% in 2006 to just 3% in 2012;
  • Burton Group states that 802.11n marks the beginning of the end for wired Ethernet as the dominant LAN access technology in the enterprise;
  • Best-in-class Wi-Fi networks sport WPA2 encryption, wireless intrusion detection, policy enforcement firewalls, and FIPS 140-2/Common Criteria/DoD validation - making them as secure as, or more secure than, most wired networks.

The best solutions for end users originate from understanding how and where they want to use the network, and then designing networks that meet those needs.

Aruba's network rightsizing program defines just such a process - measure wired port utilization, consolidate ports in use into fewer switches, and deploy 802.11n wireless to address mobility needs. Use Wi-Fi everywhere you can, wired networks only where you must. If savings are to be had, the rightsizing analysis process will tease them out. If not, then that will also be made clear. Either way, the network rightsizing analysis will offer insights into network and port utilization that might not be intuitively obvious.
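The measurement step of that process can be sketched in code. This is a hypothetical illustration - the port names, the traffic data, and the 20% headroom figure are invented for the example, not taken from Aruba's program or CSU's actual formula:

```python
# Hypothetical rightsizing sketch: count which wired ports actually
# passed traffic over the audit window, then size the switch refresh
# to the active ports plus some headroom. All data here is invented.
def rightsize(port_traffic, headroom=0.2):
    """port_traffic maps port name -> packets seen over the audit window."""
    active = [port for port, packets in port_traffic.items() if packets > 0]
    return {
        "total_ports": len(port_traffic),
        "active_ports": len(active),
        "utilization_pct": 100 * len(active) / len(port_traffic),
        "ports_to_refresh": int(len(active) * (1 + headroom)),
    }

# Example audit: two of four ports passed no packets at all.
audit = {"gi1/0/1": 120_000, "gi1/0/2": 0, "gi1/0/3": 45, "gi1/0/4": 0}
print(rightsize(audit))
```

Run across a whole campus, a tally like this is what revealed that more than half of CSU's ports were idle - and what let the refresh be scoped to the ports actually in use.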


Returning to CSU, what the IT staff decided to do was to obtain more data by measuring wired port usage. What they found surprised them: wired ports across all 23 campuses were consistently underutilized. More than half of the wired ports had passed no packets during the previous six months.


Armed with these data, the team decided to embark on a new approach. Instead of upgrading the entire wired network, something they had historically done every 4-5 years, they looked at the opportunity before them with fresh eyes.


Wi-Fi was determined to be a reliable, low-cost option for delivering pervasive campus connectivity. Several campuses had already deployed some Aruba wireless LAN equipment, mostly for coverage in selected high-usage areas, and San Diego State University had built a relatively large WLAN on their campus. The Aruba WLAN had proven to be highly secure, scalable and reliable. It also allowed for a scaled-back refresh of the wired network, saving money by limiting upgrades only to the wired ports that were actually used.

CSU's IT staff created a database that included every telecommunication room, the number of ports in each room, and the number of those ports that were actively used. A formula was developed to define the refresh requirements of each of the 23 campuses based on this measurement.

By applying this formula across all 23 campuses, CSU was able to save approximately $30 million by reducing the scale of the wired network refresh and enhancing network access with Aruba’s Wi-Fi solutions.


The CSU system still uses wired networks but they've been rightsized to address actual and projected utilization. Wireless network utilization has risen sharply, because users are taking advantage of the mobility afforded by the expanded 802.11n network. And CSU saved a whopping big chunk of change that can be applied to other programs and opportunities.


Network rightsizing is a proven method of assessing and adjusting your network infrastructure. The California State University rightsizing program is a testament to the validity and value of the rightsizing model.


While the rightsizing mantra is to use wireless wherever you can, wired only where you must, the model makes no presumptions about the right mix of wired and wireless access. Proponents of “true-sizing” maintain no such neutrality. Their bias towards Ethernet marginalizes Wi-Fi, and in so doing deprives end users of the potential cost savings and mobility/efficiency gains that organizations like CSU have obtained.

The Lessons of Wi-Fi #7: You Don't Need Unobtainium To Build Great Wi-Fi Products

Introduced in July 1976, the Zilog Z80 was an 8-bit microprocessor that operated on 1, 4, 8, or 16-bit data, had a 16-bit address bus, generated its own RAM refresh signals, and would run programs originally designed for Intel’s 8080 CPU. The flexibility of the design made it suitable for a very wide range of consumer, industrial, and military applications spanning from the Tandy TRS-80 computer to programmable logic controllers to naval weapon systems. Prices fell as volumes rose, and the Z80 was one of the most popular 8-bit CPUs for many years following its original introduction.

One of the wonders of semiconductor technology is that a standard part like the Z80 can find its way into so many different applications. The very same CPUs, memories, amplifiers, voltage regulators, and/or transceivers found in consumer products in your home might be found in automobiles, office equipment, factory production lines, airplanes, or ships. What differs is how the part is applied, packaged, and tested. In other words, you don't always need custom parts made of unobtainium to perform specialized tasks in demanding environments.

What happened to the Z80 in the 1970s is happening today with 802.11n chip sets. Chip set vendors are designing a common set of 802.11n parts for use in enterprise, SMB, gateway, and home access point and router products. Doing so drives up the volume of sales, resulting in production economies that boost profit margins for chip vendors even as prices fall for end users.

One of the largest Wi-Fi chip vendors – Atheros – sells its AR9002AP-4XHG chip set for all of the above referenced applications. The chip set features extensive component integration, a small form factor, and low overall cost. The fact that the AR9002AP-4XHG finds its way into such a diverse range of applications speaks volumes about the potential flexibility and robustness of the design. I say potential because whether the objective is realized or not depends on the implementation of the final Wi-Fi device.


Those who forget the lessons of Wi-Fi are doomed to repeat them. Lesson #7: you don't need unobtainium to build great Wi-Fi products.

Just as naval weapon system vendors leveraged a common Z80 design to create very unique and rugged products, so, too, has Aruba leveraged an 802.11n chip set targeted at a broad market in the design of its unique AP-105 802.11n Access Point. The AP-105 was tailored to demanding enterprise applications, and special care was taken in the design of the packaging, antennas, power supply, and security features to make the product both robust and exceptionally fast. A great AP, with a great standard 802.11n chip set, selling for a great price.

The result is an enterprise-class 802.11n access point that has higher throughput and more features than Cisco access points, yet sells for roughly 40% less money. So much less that Cisco felt compelled to pull apart the AP-105 to find out what makes it tick (they did the same when Aruba's high-end kick-ass AP-125 802.11n Access Point was released).

Their conclusion? The AP-105 is unobtainium-free and therefore no better than a consumer product. You know, like that cell phone you rely on for emergency calls 24x7, or that iPod that has delivered faithful service every day at the gym. Comparing the reliability of the AP-105 to that of a consumer product is not an insult. At the end of the day, Cisco still has to explain why the AP-105 is faster, more feature rich, less expensive, and easier to install than its own run-of-the-mill, over-priced, unobtainium-based access points.

So with the wind of good design at our backs, and unobtainium nowhere to be seen, the AP-105 is flying off the shelves, charting the same path the Z80 followed.

The Lessons of Wi-Fi #6: Sleight-Of-Hand Is No Substitute For Good Product Design

Would you ever strap a PC to your ceiling and run it there? Probably not. What about inside the plenum space above the ceiling? Nope.

Accessibility aside, the ceiling and plenum are hostile environments for electronics that aren't specifically designed for the vibration, temperature extremes, and blown dust typical of these locations.


If you look inside devices designed for this environment - smoke detectors, passive infrared sensors, quality Wi-Fi access points - what you WON'T find are vibration-sensitive connectors (like SIMM sockets), moving parts (like fans), and modular circuit boards that could wiggle loose. These devices are typically designed to have high mean time between failure (MTBF) ratings, something impossible to achieve with commercial SIMMs or fan-based power supplies. It seems so intuitive...and yet.

Those who forget the lessons of Wi-Fi are doomed to repeat them. Lesson #6: sleight-of-hand is no substitute for good product design. Wi-Fi access points need to be designed from the ground-up to withstand the rigors of ceiling and outdoor mounting environments.

Consider Wi-Fi arrays, which are effectively PC motherboards with a fleet of sockets, add-on modules, plug-in connectors, and memory SIMMs. Array vendors even conjured up a fan-based, PC-like power supply - no standard 802.3af Power-over-Ethernet here. And when it fails you've lost 4+ radios at one time. The only workaround is to double up the number of arrays, a real budget sink. Arrays just aren't designed with long service life, energy efficiency, or network resiliency in mind. That's why no leading vendor in the Wi-Fi market sells arrays.

Aruba Wi-Fi access points have no fans and no SIMM sockets. Our 802.11n access points are designed for the rigors of ceiling and plenum mounting, and run from standard 802.3af PoE. MTBF ratings are in excess of 250,000 hours - more than 28 years. And should an access point go down, Aruba's Adaptive Radio Management adjusts the power of nearby access points to self-heal the coverage gap. Automatically.
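The MTBF claim is easy to sanity-check, and combining MTBF with a repair time gives the classic steady-state availability estimate. The 24-hour repair window below is an assumption for illustration, not an Aruba figure:

```python
# Sanity-check the MTBF arithmetic: 250,000 hours really is over 28 years.
HOURS_PER_YEAR = 365 * 24   # 8,760

def mtbf_years(mtbf_hours):
    return mtbf_hours / HOURS_PER_YEAR

def steady_state_availability(mtbf_hours, mttr_hours):
    """Classic MTBF / (MTBF + MTTR) availability estimate."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

print(f"{mtbf_years(250_000):.1f} years between failures, on average")
# Assuming a (hypothetical) 24-hour window to swap a failed unit:
print(f"availability = {steady_state_availability(250_000, 24):.6f}")
```

Of course, per-unit availability is only half the story - the self-healing coverage described above is what keeps a single AP failure from becoming a user-visible outage.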

They'll provide years of reliable service and are backed by a lifetime warranty. And they cost less than an array-based system. A lot less.

So the next time you consider upgrading your wireless LAN, think about the environment in which the equipment will be used. Reliable products don't happen by magic - they happen by design.

16 February 2010

The Lessons Of Wi-Fi #5: Eggs Break So Don't Put Them All In One Access Point


Let's consider an alternate ending to Lesson #4. You need wireless access across an entire floor of your building, and a Wi-Fi vendor with shiny white tasseled loafers planted on your desk says he has just the solution: a single 16-radio access point that will provide coverage across the whole floor and save you a bundle in installation costs. How can you go wrong? Think of the cost savings: only one access point to buy, only one access point to wire.

Those who forget the lessons of Wi-Fi are doomed to repeat them. Lesson #5: eggs break - don't put them all in one access point.

What appears alluring at first glance is really false economy. One single failure and there's nothing between you and a totally dead network - you'll have lost the entire floor. A 16-radio access point on a single cable sounds cool, but it only gives you coverage - not capacity (you'll need a lot more radios, cables, and switch ports for that). And it offers no redundancy against failures like a dead CPU or memory.

How about just throwing in a second 16-radio access point for redundancy? Even if you could align it to deliver the same coverage pattern, your hardware costs would be blown sky high. And if you're using 802.11n, you’ll further drain the bank by needing additional expensive power supplies and even more cables and ports.

With a multi-access-point, multi-channel design, any coverage gap created by the loss of a single access point is mitigated by nearby access points. Load balancing handles high-density scenarios while airtime fairness handles different mixes of 802.11a/b/g/n clients. And using separate access points allows you to cover rooms and labs behind lath walls and metal-foil wallpaper that can't be penetrated from outside - even by a single, centrally-located 16-radio array.

The question to ask yourself is what is the cost of a failure? How much will you lose if the entire office wireless network goes down for a day? Or students can’t access the Internet? Or a trade show network stops running? For most users, the cost of putting all of your eggs in one access point is too high.

You've now discovered why no major wireless LAN vendors pack so many radios into a single access point. It's false economy because it puts your business at risk should a failure occur.


And as far as cost differences, they've all but evaporated with Aruba's newest 802.11n access points. You don't need to take my word for it - Gartner's 2009 Wireless LAN Infrastructure Magic Quadrant spells it out in black and white.

If you'd like to get the whole picture on Wi-Fi architecture you've only to download our free white paper, WLAN RF Architecture Primer. And leave it to someone else to relearn the lessons of Wi-Fi.


11 February 2010

The Lessons Of Wi-Fi #4: All Wi-Fi Vendors Live By The Same Rules of Physics

You've invited Wi-Fi vendors to your facility to discuss a new Wi-Fi project. You need wireless access across an entire floor of your building which includes open plan seating, conference rooms, and executive offices. This will be the primary form of network access and it needs to work. All the time.

It's late afternoon. A Wi-Fi vendor sits across from you in his white suit and black shirt, the very model of semi-neo-avant-garde stylin'. His shiny white tasseled loafers are firmly planted on the corner of your desk. He looks you straight in the eyes and says that his access point transmits radio signals farther than anyone else's. "It uses special technology. Yes, it's expensive, but by packing sixteen super duper radios in one unit you'll save a bundle because you only need one access point to cover the entire floor." Wow! How can you go wrong?


Those who forget the lessons of Wi-Fi are doomed to repeat them. Lesson #4: we all live by the same laws of physics, and no Wi-Fi vendor has yet bent them to their will.


The maximum output of a radio at any given frequency is dictated by local regulatory agencies. In most countries 100 milliwatts is the upper limit of what an indoor access point is permitted to transmit - regardless of access point vendor, and irrespective of Wi-Fi chip vendor (Atheros, Broadcom, Intel, etc.). There is a level playing field when it comes to building radios.
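RF power is usually quoted in dBm (decibels relative to one milliwatt), so it helps to see how the 100-milliwatt limit translates. A quick conversion sketch:

```python
# Convert between milliwatts and dBm (decibels relative to 1 mW).
# The 100 mW indoor limit discussed above is simply 20 dBm.
import math

def mw_to_dbm(mw):
    return 10 * math.log10(mw)

def dbm_to_mw(dbm):
    return 10 ** (dbm / 10)

print(f"100 mW = {mw_to_dbm(100):.1f} dBm")   # the typical indoor ceiling
print(f"+3 dB  = {dbm_to_mw(23):.0f} mW")     # doubling power adds only 3 dB
```

The logarithmic scale is the point: doubling transmit power buys you only 3 dB, which is why no vendor can "out-power" the others into dramatically longer range.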


What vendors can do is twiddle with antennas, using directional antennas to focus the allowed radio energy into better-defined beams. And, indeed, doing so can project radio signals over longer distances.


The issue is that Wi-Fi networks are bidirectional - there's something on the receiving end of those directional antennas. Low-power clients like iPhones and netbooks aren't equipped with directional antennas, much less ones that are easily focused on access points. They may be able to hear distant access points, but the access points may be unable to hear them - even if directional antennas are used - because the clients don't use high-power radios.


Additionally, as we learned in Lesson #3, bit rate is inversely proportional to range. In a shared medium like 802.11, where only one device transmits at any one time, lower data rates mean less available airtime for data on that entire 802.11 channel. So even if an access point and its clients can communicate, the throughput from the clients to the access point will be relatively low. Not good for voice. Not good for video. Not good for you.
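The airtime effect is easy to quantify. This simplified sketch ignores 802.11 MAC overhead (preambles, ACKs, contention) and just compares how long a fixed-size frame occupies the shared channel at different rates:

```python
# Why one slow, distant client drags down a shared 802.11 channel:
# the airtime a fixed-size frame consumes grows as the data rate falls.
# MAC overhead (preambles, ACKs, backoff) is ignored for simplicity.
FRAME_BITS = 1500 * 8   # a 1500-byte frame

def airtime_ms(rate_mbps):
    """Milliseconds a frame occupies the air at the given PHY rate."""
    return FRAME_BITS / (rate_mbps * 1_000_000) * 1000

for rate in (300, 54, 6, 1):
    print(f"{rate:3d} Mb/s -> {airtime_ms(rate):7.3f} ms per frame")
```

A frame sent at 1 Mb/s holds the channel three hundred times longer than the same frame at 300 Mb/s - airtime that every other client on the channel loses.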


You don't get something for nothing, but you can find yourself with nothing from something. The Wi-Fi standards anticipated the use of multiple access points, and that's how clients are designed to work. Pushing the limits of how far a Wi-Fi signal can be made to propagate has heuristic value, but when it comes to real-world deployments it can jeopardize the functionality and reliability of your network.


It's best just to tell the vendor to take his shoes off your desk and sell his wares elsewhere - you're having none of it.


If you'd like to get the whole picture on Wi-Fi architecture you've only to download our free white paper, WLAN RF Architecture Primer. And leave it to someone else to relearn the lessons of Wi-Fi.