I always say I am no networking specialist… but I had to become one for this next post.

If you’ve ever tried to get Docker Windows containers running behind NAT on a physical Windows host that sits on a trunk/tagged VLAN with no native VLAN, this next post is for you!

Spoiler: NATting containers on a physical Windows server with no native VLAN doesn’t work!

Tested on W2K19, W2K22, W2K25

Picture this:

Some developer in your organization thinks it’s a great idea to build Windows containers.

Things look peachy at first… you build them a vanilla physical Windows Server, set up the Docker daemon and hand them the box, then go get your smug self a cup of coffee… A few minutes later they ping you to complain that the fresh Windows containers they’re deploying can’t route to the internet. You spit your coffee all over your keyboard and monitor, moan at them and blame the firewall team!
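For the record, the “vanilla” handover setup above is nothing exotic. A sketch of what that box looks like (the Docker install path varies by Windows Server version and licensing, and the image tag below is just an example):

```powershell
# Enable the Containers feature, then install the Docker daemon
# (install method varies by Windows Server version / licensing)
Install-WindowsFeature -Name Containers
Restart-Computer -Force

# After reboot, Docker on Windows creates a default NAT network for containers
docker network ls            # should show a network named "nat" using the "nat" driver
docker network inspect nat   # shows the internal subnet containers get addresses from

# Quick smoke test from inside a container (image tag is an example)
docker run --rm mcr.microsoft.com/windows/servercore:ltsc2022 ping -n 2 8.8.8.8
```

On a host with a native/untagged VLAN that last ping just works; on the tagged-only hosts in this post, it’s exactly what fails.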

Unfortunately the firewall team checks the logs and tells you it’s all working on their end!

Back to the drawing board. You check the networking stack on the physical server and the box itself can reach the resources just fine. You step into a Windows container and… nothing. Ping doesn’t work! No traffic of any kind comes back to the container!

You spin up Wireshark and can see the sweet, sweet container ping traffic flowing to the destination, but when the reply packet comes back it hits the physical container host and disappears into the ether… instead of into the container!!
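If you’d rather not install Wireshark on a production box, Windows ships pktmon, which shows the same one-way traffic. A sketch (this is the syntax from recent builds; early W2K19 releases used a slightly different pktmon command set, and `<container-id>` is a placeholder):

```powershell
# Only capture traffic to/from the ping target to keep the trace small
pktmon filter add -i 8.8.8.8

# Capture, reproduce the ping from inside the container, then stop
pktmon start --capture
docker exec <container-id> ping -n 2 8.8.8.8
pktmon stop

# Convert the trace to something Wireshark can open
pktmon etl2pcapng PktMon.etl -o nat-ping.pcapng
```

In the trace you see the echo request leave the physical NIC and the echo reply arrive back on it, but the reply never traverses the vSwitch to the container’s endpoint.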

In effect, the container is left hanging: the host treats the packet as “not mine” and drops it, or worse, it gets lost in some kernel limbo.

This feels like a bug in the Windows networking stack, triggered by NATting container traffic on a host running on a non-native, tagged VLAN! The failure pattern:

1). The physical Windows container host is on a tagged-only VLAN (e.g., VLAN 100 on a trunk), with no native VLAN set.

2). A container sends traffic out via NAT.

3). The remote server replies, but the return traffic lands on the host’s NIC (VLAN-tagged) and fails to get properly redirected into the container.
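You can confirm the setup from the host itself before blaming the network. A diagnostic sketch (the “VLAN ID” display name is driver-dependent, and “Ethernet” is a placeholder adapter name):

```powershell
# Is the physical NIC itself pinning a VLAN tag? (property name varies by NIC driver)
Get-NetAdapterAdvancedProperty -Name "Ethernet" -DisplayName "VLAN ID"

# What NAT object and internal subnet did Docker/HNS create?
Get-NetNat

# The host-side vNIC of the container NAT switch and its gateway address
Get-NetIPAddress -InterfaceAlias "vEthernet (nat)"
```

If the NAT object and gateway look healthy but replies still die at the host, you’re most likely in the tagged-only VLAN scenario above.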

The Lazy-but-Legit Remedy (because good luck getting Microsoft support on the line)

There’s a no-frills fix that works every time: Host your NAT-enabled containers only on Windows hosts where the physical NIC is in a native VLAN… just plain vanilla untagged traffic goodness.
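In practice that means asking the network team to make the host’s switch port an access port (e.g. `switchport mode access` on Cisco gear), or at least to set a native VLAN on the trunk, and then making sure the NIC itself isn’t tagging either. A host-side sketch (the “VLAN ID” display name and the “untagged” value vary by NIC driver; “Ethernet” is a placeholder):

```powershell
# Clear any VLAN tag configured on the NIC itself so its traffic leaves untagged
Set-NetAdapterAdvancedProperty -Name "Ethernet" -DisplayName "VLAN ID" -DisplayValue "0"
Restart-NetAdapter -Name "Ethernet"

# Verify
Get-NetAdapterAdvancedProperty -Name "Ethernet" -DisplayName "VLAN ID"
```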

Why this works:

  • No idea.
    • Best guess:
      • The NIC’s native VLAN lets the host kernel see those return packets as local traffic, allowing established NAT mappings to do their job.
      • No more routing misfires.
      • Just containers that actually… you know, work.

So yeah, your network may look less “multi-VLAN fancy” for these hosts, but it will work for running containers with NAT on Windows.

Hope you found this helpful!

vMan