Top 10 Mistakes Network Engineers Make when Troubleshooting
Until we reach network perfection (which, let's be honest, may never really happen), engineers will face problems with network systems and the applications they support. Whether the issue is slow performance, poor voice/video quality, dropped connections, or any of the other events that plague networks today, engineers need to continually hone their troubleshooting skills to stay on top of these business efficiency killers. They also need to avoid the common pitfalls that all too many network engineers fall into when troubleshooting problems.
Let’s look at a few examples.
1. Making assumptions about the root cause of a problem.
Let's face it: we humans make assumptions based on what we think we know. When a problem strikes, we can't help jumping to conclusions, especially if we have both time and experience in a particular network environment. However, making assumptions can be a huge mistake. They can lead to nonsensical network changes, costly upgrades, and baseless "improvements" - all with our fingers crossed, hoping the problem will go away. This troubleshooting mistake should be avoided at all costs. Instead, before making these knee-jerk decisions, gather facts about the problem. Fully understand the who, what, where, why, and how of the issue before changing a thing. Let facts guide every decision.
2. "This fix worked before, let's try it again" Troubleshooting
Similar to mistake number one, this common response to network problems is also based on assumptions. We are all victims of our own experience, so it's easy to rely on our knowledge of what worked last time, thinking that the same will be true again. In many cases, a new problem will show the same symptoms as a previous one, but the root cause could be entirely different.
Before changing anything, isolate the problem domain to the network, server, application, or client, and clearly identify which component is to blame instead of reaching for the guess-and-change approach. Tools that make use of SNMP, NetFlow, and packet capture help pin the problem to a layer before moving forward with the resolution.
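For illustration, here is a minimal sketch of that kind of first-pass isolation in Python, assuming the slow application is a web app at the hypothetical address app.example.com. It times DNS resolution, the TCP handshake, and a full HTTP response separately, so the layer that is dragging stands out before anything gets changed.

```python
import socket
import time
import urllib.request

HOST = "app.example.com"   # hypothetical application server
PORT = 443
URL = f"https://{HOST}/"

def timed(label, action):
    """Run one check, print how long it took and whether it succeeded."""
    start = time.perf_counter()
    try:
        action()
        status = "ok"
    except OSError as exc:
        status = f"failed ({exc})"
    print(f"{label:<12} {(time.perf_counter() - start) * 1000:8.1f} ms  {status}")

# Name resolution only
timed("DNS lookup", lambda: socket.getaddrinfo(HOST, PORT))

# TCP three-way handshake only: network path plus a listening service
timed("TCP connect", lambda: socket.create_connection((HOST, PORT), timeout=5).close())

# Full application response: server processing plus payload transfer
timed("HTTP GET", lambda: urllib.request.urlopen(URL, timeout=10).read())
```

If DNS and TCP look normal but the HTTP step is slow, the network is probably not the place to start changing things.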
3. Rebooting the Problem Away
From home routers to 10G switches, almost all electronic devices need to be rebooted at one time or another. It's just a part of how things work today. However, in some IT environments, rebooting a device has become the standard for first-step troubleshooting. This is especially true if a device or server reboot has worked in the past.
If rebooting a device resolves a problem, the fix may only be temporary, requiring another reboot in the near future. Of course, a reboot may be required after a software upgrade, patch, or configuration change. However, as a first response to a network problem, repeatedly rebooting a device will only mask the real root cause. Prior to rebooting a device, collect as much information as possible. For example, is the access point still responding to current users? Is the server accepting new TCP connections? Is the switch CPU at 100% utilization? This information may steer engineers toward the real root cause rather than the temporary fix.
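As a small illustration, the sketch below (in Python, against a hypothetical server name and port) records whether a suspect server is still accepting new TCP connections, with a timestamp, so at least some evidence survives the reboot.

```python
import socket
from datetime import datetime, timezone

TARGET = ("srv01.example.com", 443)     # hypothetical server and service port
LOG_FILE = "pre_reboot_checks.log"

def accepts_tcp(host, port, timeout=3):
    """Return True if the host completes a TCP handshake on the given port."""
    try:
        socket.create_connection((host, port), timeout=timeout).close()
        return True
    except OSError:
        return False

stamp = datetime.now(timezone.utc).isoformat()
state = "accepting" if accepts_tcp(*TARGET) else "NOT accepting"
line = f"{stamp} {TARGET[0]}:{TARGET[1]} {state} new TCP connections"

# Append to a log so the evidence survives the reboot
with open(LOG_FILE, "a") as log:
    log.write(line + "\n")
print(line)
```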
4. Upgrading the Problem Away
Upgrading from 1Gbps to 10Gbps should increase performance 10-fold, right?
No.
Seldom is this the case. All too often, when faced with network problems - especially ones involving slow performance - network engineers are tempted to increase WAN bandwidth, upgrade switches or routers, or implement acceleration technologies. It's no secret that none of these "fixes" are free. In fact, upgrading as a first response to a problem can drain the budget, frustrate managers, reduce business productivity and, at worst, cost a network engineer their job (yikes!).
Before implementing a new technology or upgrading a system/device/connection, there are several important questions to answer: Why are we convinced that this device/technology improvement will resolve the issue? What IS the original issue? Is the problem really rooted in network capacity or latency?
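A quick back-of-the-envelope calculation often answers that last question. The Python sketch below uses illustrative numbers, not measurements from any real network: a single TCP flow cannot move data faster than its window size divided by the round-trip time, no matter how big the pipe is.

```python
def max_tcp_throughput_mbps(window_bytes, rtt_ms):
    """Upper bound on single-flow TCP throughput: window / round-trip time."""
    return (window_bytes * 8) / (rtt_ms / 1000) / 1_000_000

WINDOW = 64 * 1024   # a common default receive window of 64 KB

for rtt_ms in (1, 10, 40, 80):
    cap = max_tcp_throughput_mbps(WINDOW, rtt_ms)
    print(f"RTT {rtt_ms:>3} ms -> at most {cap:7.1f} Mbps per flow")

# With 40 ms of RTT, a 64 KB window caps each flow near 13 Mbps, so moving from
# 1 Gbps to 10 Gbps changes nothing for that transfer. Tuning the window or
# reducing latency would - which is why the root cause matters more than the pipe.
```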
While it’s nice to have new gear on the network, it’s not pleasant to see the look on a manager’s face when an expensive solution fails to address a problem. Upgrading key systems is warranted from time to time, but be careful when upgrading a device as a troubleshooting step.
5. Delivering new connections to users without validating them
We've all done it a million times. Unbox and configure a new switch, install it, patch in the uplink, connect the end user drop and watch the light go blinky-blinky.
Done, right?
No. There are several things that can affect the performance end users experience once they connect and get to work. Link negotiation mismatches, cable problems, interface hardware faults, and other throughput killers can all degrade the connection.
Before officially delivering a link to an end user, it should be tested and validated. This includes measuring latency and throughput for each connection back to the core/data center. As mentioned, most engineers will connect a link, look for a link light, send a ping, and consider the link tested. However, all of the issues described above would pass that test. Only a full performance test validates the connection and reveals these problems before users experience them.
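The sketch below illustrates the idea in Python, assuming a throwaway discard-style listener is reachable at a hypothetical address in the data center: it samples TCP connect latency and pushes a fixed amount of data to estimate usable throughput from the new drop back toward the core.

```python
import socket
import time

TEST_HOST = "validate.dc.example.com"   # hypothetical discard-style listener
TEST_PORT = 9000
PAYLOAD_MB = 50

# Latency: time several TCP handshakes and report the spread
samples = []
for _ in range(5):
    start = time.perf_counter()
    socket.create_connection((TEST_HOST, TEST_PORT), timeout=5).close()
    samples.append((time.perf_counter() - start) * 1000)
print(f"connect latency: min {min(samples):.2f} ms / max {max(samples):.2f} ms")

# Throughput: push a block of data and time it (the far end reads and discards)
chunk = b"\x00" * 65536
with socket.create_connection((TEST_HOST, TEST_PORT), timeout=5) as conn:
    start = time.perf_counter()
    sent = 0
    while sent < PAYLOAD_MB * 1024 * 1024:
        conn.sendall(chunk)
        sent += len(chunk)
    elapsed = time.perf_counter() - start
print(f"throughput: {(sent * 8) / elapsed / 1_000_000:.1f} Mbps over {elapsed:.1f} s")
```

A duplex mismatch or marginal cable that passes a ping test will show up immediately as poor throughput here.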
6. Failure to create a baseline during normal network performance
When troubleshooting a problem, engineers often utilize monitoring tools to help them collect and interpret information about the network. Even though these tools can display an impressive amount of statistical data, it’s easy to get lost in the details if a “normal” baseline does not exist.
Before a problem strikes, effort should be made to properly baseline the network. This would include collecting traffic utilization and latency statistics on key network links, response time measurements on critical business applications, packet capture samples including typical conversations and protocols, and a complete wireless assessment. These reports will assist network engineers when an issue arises since they will know what "normal" is.
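As one small example of what a baseline sample can look like, the Python sketch below records response times for a key application to a CSV file at a fixed interval. The URL and interval are placeholders; a real baseline would also cover link utilization, packet samples, and the wireless environment.

```python
import csv
import time
import urllib.request
from datetime import datetime, timezone

APP_URL = "https://intranet.example.com/"   # hypothetical business application
CSV_FILE = "app_response_baseline.csv"
INTERVAL_S = 300                            # one sample every five minutes
SAMPLES = 12                                # a one-hour run; extend for a real baseline

with open(CSV_FILE, "a", newline="") as f:
    writer = csv.writer(f)
    for _ in range(SAMPLES):
        start = time.perf_counter()
        try:
            urllib.request.urlopen(APP_URL, timeout=10).read()
            elapsed_ms = round((time.perf_counter() - start) * 1000, 1)
        except OSError:
            elapsed_ms = ""                 # record the failure as an empty sample
        writer.writerow([datetime.now(timezone.utc).isoformat(), elapsed_ms])
        f.flush()
        time.sleep(INTERVAL_S)
```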
7. Lack of wireless tools and experience
Wireless can be a real pain, especially as more end user devices ditch the cable and go 100% Wi-Fi. This trend, as well as the increase in the voice and video applications these devices demand, has greatly elevated the scope and complexity of wireless environments. Even when these systems are implemented and maintained by seasoned RF experts, clients can still experience poor performance, network disconnections, and other frustrating issues.
Since the wireless environment is easily susceptible to performance problems, it is often the first to take the blame when a new event strikes. Many network engineers point the finger at the Wi-Fi simply because it is an area of the network they don't fully understand or lack the tools to analyze. Rather than have a huge network blind spot, network managers should invest in both tools and training to get engineers up to speed on wireless, equipping them to respond to problems in this domain.
8. Under-monitoring the Network
Problems that engineers face today are complex, intermittent, and manage to hide in the shadows of the system. It used to be that an up/down ping-based tool was all that was needed to monitor the network. This has drastically changed.
Resolving today's issues requires monitoring systems that are both network and application aware, making use of SNMP, NetFlow, and packet capture to leave no visibility stone unturned. These systems need to watchdog applications 24/7/365 to ensure that intermittent problems are caught in the act, rather than missing the event when monitoring systems are looking the other way.
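In miniature, the watchdog idea looks something like the Python sketch below, which compares a fresh response-time sample against the baseline CSV gathered in the earlier sketch and flags anything far outside normal. A production monitoring platform does far more, but the principle of continuous, baseline-aware checking is the same.

```python
import csv
import statistics
import time
import urllib.request

APP_URL = "https://intranet.example.com/"   # same hypothetical application
CSV_FILE = "app_response_baseline.csv"      # baseline gathered earlier

# Load historical samples, skipping any empty (failed) rows
with open(CSV_FILE, newline="") as f:
    history = [float(row[1]) for row in csv.reader(f) if len(row) > 1 and row[1]]

mean = statistics.mean(history)
threshold = mean + 3 * statistics.pstdev(history)

# Take one fresh sample and compare it against the baseline
start = time.perf_counter()
urllib.request.urlopen(APP_URL, timeout=10).read()
now_ms = (time.perf_counter() - start) * 1000

if now_ms > threshold:
    print(f"ALERT: {now_ms:.0f} ms vs baseline {mean:.0f} ms (threshold {threshold:.0f} ms)")
else:
    print(f"normal: {now_ms:.0f} ms (baseline {mean:.0f} ms)")
```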
9. Misunderstanding the operation of core technologies
What do spanning-tree, ARP, auto-negotiation, ICMP redirects and IP fragmentation have in common?
They are all old (20+ years each) and absolutely critical for network operation. Well, maybe not IP fragmentation in every case, but it was worth the mention. Network engineers need to ensure that they understand the core technologies their state-of-the-art systems are built on. When prepping for that next vendor certification exam, don't leave out the protocols and technologies that still have a hand in keeping things running this year and beyond.
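As a small, Linux-specific illustration, the Python sketch below reads the negotiated speed and duplex for an interface from sysfs and dumps the kernel ARP table; half duplex or an unexpectedly low speed is often the first clue that auto-negotiation failed. The interface name is a placeholder, and other platforms expose the same information through different tools.

```python
from pathlib import Path

IFACE = "eth0"   # hypothetical interface name; adjust for the host in question

def read_sysfs(attr):
    """Read a single attribute for the interface from sysfs."""
    try:
        return Path(f"/sys/class/net/{IFACE}/{attr}").read_text().strip()
    except OSError:
        return "unknown"

# Negotiated speed (Mb/s) and duplex for the interface
print(f"{IFACE}: speed {read_sysfs('speed')} Mb/s, duplex {read_sysfs('duplex')}")

# The kernel ARP cache: which neighbours this host has resolved, and where
print("\nARP table:")
print(Path("/proc/net/arp").read_text())
```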
10. Using laptop hardware to capture packets
Packet capture and trace file interpretation is the gold standard for deep-dive detail when investigating a problem. This analysis method is critical for finding the root cause of the issue, rather than just exonerating the network and throwing the problem over the wall.
When it comes to packet collection, a common mistake network engineers make today is misunderstanding the limits of the hardware they are using to capture. Take Wireshark, for example. This open-source tool is known and loved by engineers around the globe, and is the most downloaded networking tool available. However, most people use it on laptops or on untested hardware that cannot keep up with high-rate traffic streams. In fact, most standard laptops struggle to capture seamlessly at rates higher than 100Mbps!
Know the limits of the hardware used to collect packets before capturing in the data center environment. Missing packets from trace files can easily lead an engineer astray, increasing the time to resolution of a nagging problem.
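A rough sizing exercise before the capture helps. The Python sketch below converts an offered load into packets per second and sustained disk write rate, then compares it with an assumed write ceiling for the capture machine; the 120 MB/s figure is an illustrative assumption, not a measured value for any particular laptop.

```python
DISK_CEILING_MB_S = 120   # assumed sustained write rate of the capture machine

def capture_load(link_mbps, avg_pkt_bytes):
    """Packets per second and MB/s written to disk for a fully loaded link."""
    bytes_per_sec = link_mbps * 1_000_000 / 8
    return bytes_per_sec / avg_pkt_bytes, bytes_per_sec / 1_000_000

for rate_mbps in (100, 1000, 10000):
    pps, mb_s = capture_load(rate_mbps, avg_pkt_bytes=500)
    verdict = "ok" if mb_s < DISK_CEILING_MB_S else "will drop packets"
    print(f"{rate_mbps:>6} Mbps @ 500 B avg -> {pps:>10,.0f} pps, {mb_s:8.1f} MB/s to disk ({verdict})")
```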
Conclusion
This is not an exhaustive list - there are other pitfalls that engineers of all experience levels fall into from time to time. But with a little preparation and awareness of these common mistakes, engineers can reduce time to resolution, avoid unnecessary expense, and sidestep the frustration and headaches that troubleshooting network problems can bring.