Description
Hi, I have encountered a goroutine leak in my client using v1.8.5:
```
goroutine profile: total 2472
2151 @ 0x436bb0 0x40506d 0x404e35 0x8ce854 0x8ce837 0x8cfac2 0x466431
# 0x8ce853 nhooyr.io/websocket.(*mu).forceLock+0x83 /Users/redacted/go/pkg/mod/nhooyr.io/websocket@v1.8.5/conn_notjs.go:234
# 0x8ce836 nhooyr.io/websocket.(*msgWriterState).close+0x66 /Users/redacted/go/pkg/mod/nhooyr.io/websocket@v1.8.5/write.go:224
# 0x8cfac1 nhooyr.io/websocket.(*Conn).close.func1+0x31 /Users/redacted/go/pkg/mod/nhooyr.io/websocket@v1.8.5/conn_notjs.go:142
```
When we moved back to v1.8.4, the issue disappeared.
Our client app is a load-testing client, so it opens connections and then closes them to simulate "real" clients connecting. New connections are opened continuously, and for each one the `Close` method is called exactly once. If `Close` leaks a goroutine, the large number of accumulated goroutines causes memory issues for us.
The server side is Node.js using `ws` 7.2.3. We're running all of this in Linux containers on Kubernetes.
I attempted to write a small repro case in Go (both client and server), with self-signed TLS certs as well to see if TLS could trigger the issue, but I didn't manage to reproduce it. AFAIK nothing odd happens while the client and server communicate: a few synchronous messages, then the server pushes a few updates. Either side may be the first to close the connection, but both sides use the close handshake.
Not sure if this is enough detail; LMK if there's something else I can look up to help!