It turns out that logging PuTTY sessions to a network share can be a bad idea. I learned this after getting a new laptop at a new job. I had several workstations and wanted all of my PuTTY sessions logged to the same folder – I figured it would help with troubleshooting at some point down the line. What I didn’t realize was that this would cripple me for a few days while I worked and tried to figure out why my sessions were so slow.
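For the curious: the destination lives under Session → Logging → ‘Log file name’ in PuTTY, and it accepts substitution tokens like &H (hostname) and &Y&M&D (date). Mine pointed at something shaped like this – the server and share names here are invented:

```
\\fileserver\share\putty-logs\&H-&Y&M&D.log
```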
It didn’t help that I had just bought a sketchy USB-to-serial cable and was questioning the drivers I had installed. PuTTY sessions would often hang before I could even log in to a device. How frustrating! I noticed that I had the most issues when connected to a wireless network – a wired connection was typically no trouble at all. At some point in troubleshooting almost any strange issue, I turn to Wireshark. Wireshark can be a great tool when you have a defined problem scope, which I didn’t quite have here – but I was on the verge of something.
I saw a lot of chatty SMB traffic going back and forth between my laptop and a file server, and the destination folder matched my PuTTY log folder. Suddenly it hit me: PuTTY was writing every line of session output to the share, and each of those chatty SMB writes had to cross a high-latency wireless link, which left the SSH sessions from my laptop mostly unusable. It was so obvious in hindsight – why hadn’t I thought of it sooner? As soon as I changed the logging path to a local drive, my sessions sped up dramatically.
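In hindsight, a simple Wireshark display filter tells the whole story – just the SMB traffic between the laptop and the file server. The address below is a placeholder; substitute your own file server:

```
(smb || smb2) && ip.addr == 192.0.2.10
```

If the capture lights up every time a session prints a line of output, the log path is your problem.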
This is a strange problem that I suspect won’t help anyone specifically – but I do hope it’s at least mildly interesting.
Settling into a new job, I was working on what I thought was a routine change: set up a spare switch in a temporary location with a basic config. Easy enough, right? The time came for me to configure a port on the upstream device. The device in question was a legacy Catalyst 65xx – a big chassis switch that I had read about but never had any experience with. The port I was going to use had a dozen lines of configuration already applied, mostly related to queuing. My first instinct was to issue a ‘default interface <slot/port>’ command and start from scratch. This is almost always the right thing to do, as it ensures no confusing stale configuration remains (I’m looking at you, ‘switchport access vlan # / switchport mode trunk’).
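As a sketch of the idea (the interface numbering and VLAN here are hypothetical), defaulting the port first means the only configuration left on it is what you deliberately reapply:

```
Switch(config)# default interface GigabitEthernet1/10
Switch(config)# interface GigabitEthernet1/10
Switch(config-if)# switchport
Switch(config-if)# switchport mode access
Switch(config-if)# switchport access vlan 10
```

The classic trap in that parenthetical: a trunk port with a forgotten ‘switchport access vlan’ line quietly drops into that access VLAN the moment someone removes the trunk config.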
Leaning over, I asked my co-worker if the ‘default interface’ command worked on these things. After being assured that it would be fine, I held my breath and pressed ‘enter’.
I was greeted with several lines of output related to quality of service (QoS) being set to default values on a range of interfaces. Crap! Had I just wiped out the configuration for an entire line card?
No – it turns out that on these switches, queuing must be configured identically across specific groups of ports, because the ports in each group share the same underlying ASIC. I forget if it was all 48 ports on the card, or 16, or whatever, but the point is that I made a simple change and there were unintended consequences. At least Cisco was kind enough to leave me a message about it. And it didn’t bring the network down.
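If you’re ever in the same spot, it’s cheap insurance to compare the queuing on a neighboring port before and after your change (interface numbers here are hypothetical):

```
Switch# show queueing interface GigabitEthernet2/1
Switch# show queueing interface GigabitEthernet2/2
```

If the neighbor’s output changes when you only touched the first port, you’ve found one of these shared groups.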
This was a reminder that even the most mundane, routine, everyday changes can go sideways when you least expect it.