Tuesday, June 30, 2009
Managing the “Cloud” in Cloud Computing with Route Analytics
By Packet Design
Abstract:
The latest evolution in enterprise IT outsourcing, cloud computing leverages the ubiquity of the Internet, the flexibility of server virtualization, and the massive scale of today’s data centers to provide low-cost IT infrastructure as a network-based service. Though cloud computing is still in the early stages of adoption, enterprises are rightly concerned with how to manage infrastructure that resides on the Internet or shared service provider networks. But while much industry attention has been paid to systems, applications, storage and security issues, relatively little has been directed to the network management challenges of cloud computing. Cloud computing’s placement of critical infrastructure components outside traditional network boundaries greatly increases enterprise IT dependence on the complex interactions between enterprise and public IP networks.
To ensure reliable application delivery, network managers need visibility into the routing and traffic dynamics spanning enterprise and Internet domains. But traditional network management tools are incapable of providing this sort of insight. Route analytics, a network management technology adopted and deployed by hundreds of the world’s leading enterprises, service providers and government agencies, fills this visibility gap by providing routing and traffic monitoring, analysis and planning for both internal and external IP networks. Routing visibility is critical to ensuring the success of cloud computing deployments, and route analytics can provide this visibility while enhancing network management best practices.
Friday, June 26, 2009
Why Virtualize? Can Virtualization Benefit your Enterprise?
By AT&T
Abstract:
Virtualization projects are the focus of many IT professionals who are trying to consolidate servers or data centers, decrease costs and launch successful “green” conservation initiatives. Virtualizing IT resources can be thought of as squeezing an enterprise’s computer processing power, memory, network bandwidth and storage capacity onto the smallest number of hardware platforms possible and then apportioning those resources to operating systems and applications on a time-sharing basis.
This approach aims to make the most efficient possible use of IT resources. It differs from historical computing and networking models, which have typically involved inextricably binding a given software application or service to a specific operating system (OS), which, in turn, has been developed to run on a particular hardware platform. By contrast, virtualization decouples these components, making them available from a common resource pool. In this respect, virtualization prevents IT departments from having to worry about the particular hardware or software platforms installed as they deploy additional services. The decoupling and optimization of these components is possible whether you are virtualizing servers, desktops, applications, storage devices or networks.
To virtualize some or all of a computing infrastructure’s resources, IT departments require special virtualization software, firmware or a third-party service that makes use of virtualization software or firmware. This software/firmware component, called the hypervisor or the virtualization layer, performs the mapping between virtual and physical resources. It is what enables the various resources to be decoupled, then aggregated and dispensed, irrespective of the underlying hardware and, in some cases, the software OS. In effect, the hypervisor takes over hardware management from the OS. In addition to the hypervisor virtualization technology, the organization overseeing the virtualization project requires a virtualization management tool – which might be procured from the same or a different supplier – to set up and manage virtual devices and policies.
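The hypervisor’s virtual-to-physical mapping described above can be illustrated with a toy sketch. This is not the mechanism of any real hypervisor, just a minimal first-fit allocator showing the idea of apportioning a shared pool of physical capacity to virtual machines; all names and capacities are hypothetical.

```python
# Toy sketch of hypervisor-style virtual-to-physical resource mapping.
# A pool of physical hosts serves VM requests via first-fit placement.
# All host names, VM names, and capacities are invented for illustration.

class PhysicalHost:
    def __init__(self, name, cpus, mem_gb):
        self.name = name
        self.free_cpus = cpus
        self.free_mem_gb = mem_gb
        self.vms = []

    def can_fit(self, cpus, mem_gb):
        return cpus <= self.free_cpus and mem_gb <= self.free_mem_gb

    def place(self, vm_name, cpus, mem_gb):
        self.free_cpus -= cpus
        self.free_mem_gb -= mem_gb
        self.vms.append(vm_name)

def allocate(hosts, vm_name, cpus, mem_gb):
    """First-fit placement: return the host that received the VM, or None."""
    for host in hosts:
        if host.can_fit(cpus, mem_gb):
            host.place(vm_name, cpus, mem_gb)
            return host
    return None  # pool exhausted; a real system might queue or migrate

pool = [PhysicalHost("host-a", cpus=8, mem_gb=32),
        PhysicalHost("host-b", cpus=8, mem_gb=32)]
for vm, c, m in [("web-1", 4, 16), ("db-1", 6, 24), ("web-2", 2, 8)]:
    placed = allocate(pool, vm, c, m)
    print(vm, "->", placed.name if placed else "unplaced")
```

The decoupling the abstract describes is visible here: the VMs request abstract CPU and memory, and only the allocator knows (or cares) which physical box actually supplies them.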
Thursday, June 25, 2009
Interop Reports
By Jim Metzler in Cisco Fact-or-Fiction Series
Abstract:
2009 Interop Reports
The Cloud: Jennifer Geisler and industry analyst Jim Metzler discuss all the hype around "The Cloud". Jim points out that customers must first invest in virtualization, automation and standardization. This will position them to do things faster and cheaper and make "The Cloud" a natural extension of their business process.
Applications: Jennifer Geisler and industry analyst Jim Metzler discuss the role of the network in delivering applications. Jim specifically calls out how technologies such as virtualization can have a dramatic effect on user experience and explores how IT professionals need to look at application performance across the network.
Switching: Jennifer Geisler and industry analyst Jim Metzler discuss the drivers for the next generation LAN switch and how the Catalyst line of switches already delivers much of what businesses are looking for.
Routing: Jennifer Geisler and Jim Metzler discuss the dynamics of the WAN routing market and the changes driving innovations. Specific topics include the need for greater availability and security while optimizing price for performance.
Tuesday, June 23, 2009
How to Successfully Transform the Organization during IP Network Transformation
By Alcatel-Lucent
Abstract:
IP transformation is a complex and challenging journey that impacts all the service provider’s assets and activities. As in any major business undertaking, people are central to a successful outcome; therefore, close attention to human resources requirements is key during IP transformation. The organization must be able to successfully integrate and transfer staff into the new environment and provide the means for the successful creation and implementation of methodologies for building teams to support each step in the business process. This paper provides an overview of guidelines for addressing the organizational change that must accompany transformation to an IP-based network and business.
2009 Mobile Unified Communications Buyer’s Guide
By Peter Brockmann, Brockmann & Company
Abstract:
This Buyer’s Guide details the availability of mobile unified communications features from an array of vendors, each with unique target markets, channels and customers. The fact that customers have so many choices, even within a single brand of telephony system, suggests that the mobile unified communications application has in fact progressed to the point of consistently delivering capabilities that improve the productivity and security of mobile workers.
Buyers should note that not all features work the same way on every device, and some features may not be supported on certain devices at all. The actual user experience will depend on the combination of the system features presented here, the mobile operator’s services and the devices supported, any of which can and frequently does change at any time, without warning. Buyers should always check with vendors for the latest feature availability, use case definitions and device support, and should always verify claimed functionality with product demonstrations and trials prior to purchase.
Monday, June 22, 2009
Future-Proof Networking: Making Decisions That Last
By Cisco Systems
Abstract:
A funny thing happened on the way to the recession. As the global economy slowed in 2008, then came to a screeching halt in 2009, it sent a wave of change through the IT community. A mind shift gradually began to take place and CIOs found themselves pausing and reevaluating their investment decisions, realizing that now was the time to make sure every investment being made was not only in line with their company’s strategic vision, but also driving them diligently toward their goals.
Everywhere, IT infrastructure purchasing decisions were being scrutinized. Is this purchase strategic to our business? What will we really gain in the long run? Are we applying resources in the right areas? Will this investment proactively prepare us for the inevitable upturn as well as the next downturn? Every investment decision was being revisited to evaluate its worth.
IT has matured and become pervasive. The recognition of its value, the growing dependence companies have on IT and the role it plays as an integral part of business success are no longer debated. A company’s IT infrastructure is now recognized as vital to business growth and productivity. As a strategic part of doing business, many organizations have become more thoughtful in their IT investments and less apt to cut IT budgets and resources as a way to reduce costs.
Value is today’s challenge and motivator. The new test of a good value in IT has shifted from how inexpensively something can be purchased to how this investment serves the company’s strategic vision. If there is a silver lining to be found in this economic downturn, it might be that it brought us back to a more balanced way of making IT investments.
Friday, June 19, 2009
Dedicated Distributed Sensing - The Right Approach to Wireless Intrusion Prevention
By Motorola
Abstract:
Some vendors are offering integrated wireless intrusion prevention systems (WIPS). These solutions provide only "check-box" functionality. The part-time scanning these systems typically use leaves significant frequency and time holes. Access points (APs) and sensors have different functional requirements, and integrated solutions that try to use APs as sensors will have several limitations.
While it may seem that integrated solutions cost less, in fact both the normalized cost and the total cost of ownership (TCO) are lower for Motorola’s dedicated WIPS.
Finally, WIPS is not just about rogue device management, it also encompasses everything from mobile worker protection to forensic analysis capabilities. Motorola’s dedicated distributed collaborative intelligence based WIPS offers the most comprehensive solution with the highest return on investment.
Wednesday, June 17, 2009
Ten Top Problems Network Techs Encounter
By Fluke Networks
Abstract:
Networks today have evolved quickly to include business-critical applications and services that users in the organization rely on heavily. In this environment, network technicians are required to do more than simply add new machines to the network; they are often called on to troubleshoot more complex issues to keep the network up and running at top speed. This whitepaper discusses ten common problems encountered by technicians today, along with their symptoms, causes and resolutions.
Monday, June 15, 2009
The Impact of Virtualization on Application Delivery
A Webtorials Brief
Jim Metzler, Cofounder, Webtorials Editorial/Analyst Division
Abstract:
Desktop virtualization is a classic good news/bad news situation. The good news is that because it simplifies some management tasks, improves security, and increases the reliability of desktop services, desktop virtualization helps IT organizations achieve some of the goals of application delivery. The bad news is that if IT organizations don’t implement the appropriate optimization, control and management functionality, the deployment of virtualized desktops will result in unacceptable application performance.
Relative to optimization functionality, techniques such as TCP optimization, compression and caching can provide some performance improvement, primarily for applications that are not part of the VDI traffic stream. The real performance gains come from deploying QoS and bandwidth management, both to ensure that screens refresh in a reasonable amount of time and to ensure acceptable performance for video applications.
Control functionality is needed in order to automatically protect keyboard strokes and screen refreshes from other traffic types and to also ensure sufficient capacity to effectively support audio and video traffic. Because of the complications created by both CGP and session sharing mode, it is not possible to implement this type of control by utilizing the ICA priority packet tag or by prioritizing flows based on the published application. This leaves prioritizing flows automatically according to their behavior as the most viable option.
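The conclusion above, that flows must be prioritized automatically according to their behavior rather than by the ICA priority packet tag or published application, can be sketched with a toy classifier. The thresholds and class names below are invented for illustration; a real product would learn and adapt these dynamically.

```python
# Hypothetical sketch: classifying a flow by its observed behavior
# (packet size and rate) rather than by port, tag, or application name.
# Thresholds are illustrative only, not from any real product.

def classify_flow(avg_pkt_bytes, pkts_per_sec):
    """Return a behavioral traffic class for one observed flow."""
    if avg_pkt_bytes < 200 and pkts_per_sec > 10:
        # Small, frequent packets: keystrokes and screen refreshes,
        # which the abstract says must be protected from other traffic.
        return "interactive"
    if avg_pkt_bytes > 1000:
        # Large packets: file transfer or printing inside the same session.
        return "bulk"
    return "default"

print(classify_flow(120, 50))   # small, chatty flow
print(classify_flow(1400, 5))   # large-packet flow
```

Because the classification keys only on observed behavior, it still works when CGP or session sharing hides multiple applications inside one tagged connection, which is exactly the complication the abstract describes.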
The lack of management visibility is a barrier to application delivery independent of the IT infrastructure. However, a virtualized IT infrastructure is more complex than a non-virtualized one, resulting in more sources of delay that can lead to unacceptable application performance. To compensate, IT organizations must implement solutions that give them the visibility to understand the user’s experience in real time. This type of visibility is necessary for IT organizations to focus on the company’s key applications and not just on the technology domains that support those applications.
Thursday, June 11, 2009
RFC 2544 Latency Testing on Cisco ASR 1000 Series Aggregation Services Routers
By Cisco Systems
Abstract:
This whitepaper examines and analyses traffic latency on the Cisco ASR 1000 Series Routers. The Cisco ASR 1000 has three forwarding engines, known as Cisco ASR 1000 Series Embedded Services Processors (ESPs); this document reviews the latency of two of them, the 10-Gbps Cisco ASR 1000 Series ESP (ASR1000-ESP10) and the 20-Gbps Cisco ASR 1000 Series ESP (ASR1000-ESP20). The goal is to highlight how different forwarding rates affect the latency of the Cisco ASR 1000, and to illuminate some of the choices that you must make while designing your network, including how queuing, shaping and QoS affect overall latency and network performance.
The ASR1000-ESP20 was profiled in a WAN aggregation topology with services enabled to gain insight into how system latency is affected as throughput approaches the non-drop rate (NDR).
This paper delivers results in two parts:
Phase 1: Reporting RFC 2544 latency results for IP routing with and without services enabled as detailed in the RFC 2544 Test Setup. The results reported are the latency at the calculated NDR for that packet size/test.
Phase 2: Profiling latency for different frame sizes at data points approaching the NDR in a WAN aggregation topology, in order to clearly illustrate and analyze the behaviour of the system.
All tests were run on Cisco IOS XE Release 2.2.2, using procedures based on RFC 2544 latency testing.
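The non-drop rate (NDR) central to the results above is typically found by an RFC 2544-style search over offered load: a rate counts as lossless only if every frame is forwarded, and the search homes in on the highest such rate. The sketch below is illustrative, with a toy drop model standing in for a real traffic generator and device under test.

```python
# Sketch of an RFC 2544-style throughput (NDR) binary search.
# drop_fn models the device under test: it takes an offered load
# (% of line rate) and returns the number of dropped frames.
# The toy device and its 73.4% knee are invented for illustration.

def find_ndr(drop_fn, lo=0.0, hi=100.0, tol=0.1):
    """Highest offered load (% of line rate) with zero frame loss."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if drop_fn(mid) == 0:
            lo = mid       # lossless at this rate: try higher
        else:
            hi = mid       # frames dropped: back off
    return lo

# Toy device that starts dropping frames above 73.4% of line rate.
toy_device = lambda load: 0 if load <= 73.4 else 1
print(find_ndr(toy_device))
```

In a real RFC 2544 run each trial transmits frames of a fixed size for a fixed duration, and the latency figures the paper reports are then measured while offering traffic at the NDR found for that frame size.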
Wednesday, June 10, 2009
Wireless LANs: Is My Enterprise At Risk?
By Motorola
Abstract:
Wireless technology is exploding in popularity. Businesses are not only migrating to wireless networking, they are steadily integrating wireless technology and associated components into their wired infrastructure. The demand for wireless access to LANs is fueled by the growth of mobile computing devices, such as laptops and personal digital assistants, and a desire by users for continual connections to the network without having to “plug in.”
Like most innovative technologies, using wireless LANs poses both opportunities and risks. The wireless explosion has given momentum to a new generation of hackers who specialize in inventing and deploying innovative methods of hijacking wireless communications, and in using the wireless network to breach the wired infrastructure. In fact, hackers have never had it so easy.
Tuesday, June 9, 2009
Telecommunications and IT Darwin Award Candidates
And while these examples may be somewhat humorous, one of the best ways to learn is from our own mistakes. And it's even better to learn from the mistakes of others.
Monday, June 8, 2009
Darwin Awards for Disaster Recovery
Gary Audin, Delphi, Inc.
Abstract:
“The Darwin Awards salute the improvement of the human genome by honoring those who accidentally remove themselves from it...”
No matter how smart the technologist, there are always glitches and gotchas when planning for a disaster. The problems I have seen and stories I have collected always demonstrate that the best laid plans are not necessarily the best plans.
The problems stem from assumptions made in the planning process or from the exclusion of non-technical personnel from it. I have encountered remarks from non-technical people that, when considered, are insightful and right on the mark.
The following 16 situations have been collected from clients, seminar and conference attendees, and friends. They may seem funny, even ridiculous, but remember: they are real occurrences, not the work of a joke writer. I hope they will stimulate you to brainstorm your planning process with a wide range of personnel and to consider what appear to be off-the-wall or outrageous thoughts.
Become an Expert Troubleshooter with Advanced OTDR Trace Analysis
Ensuring the Health of Tomorrow’s Fiber LANs
Fluke Networks
Abstract:
Experience designing cable and network testers has enabled a breakthrough in automated fiber trace analysis.
Automated OTDR trace analysis improves a user’s ability to determine the health of the fiber LAN by translating raw data into simple pass/fail results.
This white paper discusses how an OTDR detects and analyzes test results, and explains expected, as well as unexpected, trace data from fiber networks.
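The pass/fail idea the abstract describes, translating raw trace data into a simple verdict, can be sketched as follows. This is a deliberately simplified model: the trace is just backscatter power (dB) versus distance, an "event" is any abrupt drop between samples, and the loss budget is invented. Real OTDR analysis must also handle noise, reflective events, and ghosts.

```python
# Hedged sketch of automated OTDR trace analysis: scan a simplified
# power trace (dB vs. distance sample) for abrupt drops, report each
# event's loss, and compare total loss to a budget. Thresholds and the
# sample trace are illustrative only.

def find_events(trace_db, step_threshold_db=0.3):
    """Return (sample_index, loss_db) for each abrupt drop in the trace."""
    events = []
    for i in range(1, len(trace_db)):
        drop = trace_db[i - 1] - trace_db[i]
        if drop >= step_threshold_db:
            events.append((i, round(drop, 2)))
    return events

def pass_fail(trace_db, loss_budget_db=2.0):
    """Simple verdict: total event loss within budget?"""
    total = sum(loss for _, loss in find_events(trace_db))
    return "PASS" if total <= loss_budget_db else "FAIL"

# Toy trace with two loss events (a 0.5 dB splice and a 1.0 dB connector).
trace = [0.0, -0.05, -0.55, -0.60, -1.6, -1.65]
print(find_events(trace), pass_fail(trace))
```

This captures the translation the paper promises: the installer sees "PASS" or "FAIL" plus a short list of located loss events, rather than the raw trace itself.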
Thursday, June 4, 2009
Preview! Is Best of Breed Security the Best of Both Worlds?
Fact or Fiction Series
Sponsored by Cisco Systems
With Fred Kost, Director of Marketing, Security Solutions. Jennifer Geisler explores the different security segments, threat intelligence, and tactical versus integrated solution purchases.
Approximately 11 minutes
Is Green Switching a Red Herring?
Fact or Fiction Series
Sponsored by Cisco Systems
With Marie Hattar, Cisco VP of Marketing. Jennifer Geisler discusses green switching, the role of information technology in green initiatives, and device power consumption.
Approximately 9 minutes
Tuesday, June 2, 2009
The Mandate to Re-Engineer Enterprise Routing to Support Today’s Economy
A Webtorials Brief
Jim Metzler, Cofounder, Webtorials Editorial/Analyst Division
Abstract:
As IT plays an increasingly important role in the execution of enterprise business strategies, IT executives will need to place greater emphasis on developing technology strategies and initiatives that are tightly linked to, and highly supportive of, business requirements. However, as emphasized by both The Architecture VP and The Architecture Manager, in many cases IT organizations will have to anticipate these requirements with little input from business unit managers.
The agility and flexibility of the network in responding to new business priorities are highly dependent on the functionality and capabilities of the fundamental network infrastructure. IT executives can solidify the strategic role of their network by ensuring that the infrastructure’s most critical components, including data center class routers and switch/routers, are capable of supporting both current and emerging initiatives.
Because the infrastructure cannot be refreshed every time the business strategy changes, network designers need to provide headroom in both functionality and performance, anticipating possible future technology initiatives to the degree possible in today’s rapidly changing business and technology environments. While it is not possible to predict with certainty the exact business and technology changes that will impact a given IT organization over the next year or two, it is possible to predict with certainty that change will occur. The router functionality discussed in this white paper is a key enabler of a wide range of technical strategies that will allow IT organizations to respond to these changes.
Monday, June 1, 2009
Unified Communications: Holding Its Own in Tough Economic Times
A Webtorials Analysis
by Steven Taylor and David DeWeese
Abstract:
Members of two Nortel user groups - the International Nortel Networks Users Association (INNUA) and the Nortel INSIGHT100 large-campus user group - were invited to participate in a survey about Unified Communications (UC).
The key findings of this analysis are:
- UC is still a priority for the vast majority of today’s enterprise organizations.
- Green IT remains important to UC customers despite changing economic conditions.
- A strong correlation exists between how important users think an aspect of UC is and how satisfied they are with the capabilities UC provides.
61% of the respondents reported that they have already begun or will begin deploying UC within two years. This is essentially unchanged from last year, when 60% reported plans to deploy within two years. Of course, this does indicate some delay, which is to be expected given expenditures being frozen or reduced.
Significantly, though, in terms of relative importance for expenditures, UC moved from fifth out of ten options in 2008 to third out of eleven options this year.
Furthermore, only 6% of the respondents described themselves as being among the first to implement new technology, while 81% of the respondents were more mainstream adopters: 37% described themselves as early adopters who tended to wait “until we see the problems others have had” before implementing, while 44% described themselves as those who tend to do so once a new technology has become widely accepted. Moreover, 44% of self-described early adopters have already deployed UC, so organizations that intend to do so mostly fit the profile of the mainstream adopter, a sign of UC’s ongoing maturation.