
[controller-runtime] log.SetLogger(...) was never called; logs will not be displayed. #505

Open
porridge opened this issue Feb 6, 2024 · 12 comments
Labels
bug Something isn't working

Comments

@porridge
Member

porridge commented Feb 6, 2024

What happened:

Noticed this warning in a test run:

[controller-runtime] log.SetLogger(...) was never called; logs will not be displayed.
Detected at:
	>  goroutine 7 [running]:
	>  runtime/debug.Stack()
	>  	/usr/local/go/src/runtime/debug/stack.go:24 +0x65
	>  sigs.k8s.io/controller-runtime/pkg/log.eventuallyFulfillRoot()
	>  	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/log/log.go:60 +0xcd
	>  sigs.k8s.io/controller-runtime/pkg/log.(*delegatingLogSink).WithName(0xc000041180, {0x18e3701, 0x14})
	>  	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/log/deleg.go:147 +0x4c
	>  github.com/go-logr/logr.Logger.WithName({{0x1b57f18, 0xc000041180}, 0x0}, {0x18e3701?, 0x0?})
	>  	/go/pkg/mod/github.com/go-logr/[email protected]/logr.go:336 +0x46
	>  sigs.k8s.io/controller-runtime/pkg/client.newClient(0x13b1afa?, {0x0, 0xc000247960, {0x1b595f0, 0xc00051da40}, 0x0, {0x0, 0x0}, 0x0})
	>  	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/client/client.go:122 +0xff
	>  sigs.k8s.io/controller-runtime/pkg/client.New(0xc000474240?, {0x0, 0xc000247960, {0x1b595f0, 0xc00051da40}, 0x0, {0x0, 0x0}, 0x0})
	>  	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/client/client.go:103 +0x85
	>  github.com/kudobuilder/kuttl/pkg/test/utils.NewRetryClient(0xc00012cc00?, {0x0, 0xc000247960, {0x1b595f0, 0xc00051da40}, 0x0, {0x0, 0x0}, 0x0})
	>  	/go/pkg/mod/github.com/kudobuilder/[email protected]/pkg/test/utils/kubernetes.go:177 +0x125
	>  github.com/kudobuilder/kuttl/pkg/test.(*Harness).Client(0xc00012cc00, 0x0?)
	>  	/go/pkg/mod/github.com/kudobuilder/[email protected]/pkg/test/harness.go:323 +0x19f
	>  github.com/kudobuilder/kuttl/pkg/test.(*Step).Create(0xc0001475f0, 0xc00058c4e0, {0xc0007cc340, 0x1b})
	>  	/go/pkg/mod/github.com/kudobuilder/[email protected]/pkg/test/step.go:177 +0x6f
	>  github.com/kudobuilder/kuttl/pkg/test.(*Step).Run(0xc0001475f0, 0xc00062d5f0?, {0xc0007cc340, 0x1b})
	>  	/go/pkg/mod/github.com/kudobuilder/[email protected]/pkg/test/step.go:457 +0x265
	>  github.com/kudobuilder/kuttl/pkg/test.(*Case).Run(0xc00052d4a0, 0xc00058c4e0, 0xc0003fc8c0)
	>  	/go/pkg/mod/github.com/kudobuilder/[email protected]/pkg/test/case.go:362 +0x74e
	>  github.com/kudobuilder/kuttl/pkg/test.(*Harness).RunTests.func1.1(0xc00058c4e0)
	>  	/go/pkg/mod/github.com/kudobuilder/[email protected]/pkg/test/harness.go:401 +0x1ea
	>  testing.tRunner(0xc00058c4e0, 0xc0001a6b10)
	>  	/usr/local/go/src/testing/testing.go:1576 +0x10b
	>  created by testing.(*T).Run
	>  	/usr/local/go/src/testing/testing.go:1629 +0x3ea
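For context, the warning comes from controller-runtime's delegating log sink: the package-level logger starts out unfulfilled, and if `log.SetLogger(...)` is never called, the first use prints this warning once and then drops log output. A stdlib-only sketch of that mechanism (the type and method names here are illustrative, not controller-runtime's actual code):

```go
package main

import "fmt"

// delegatingSink mimics controller-runtime's delegating log sink:
// it stays unfulfilled until SetLogger is called.
type delegatingSink struct {
	fulfilled bool
	warned    bool
}

// SetLogger fulfills the root logger, as log.SetLogger(...) would.
func (d *delegatingSink) SetLogger() { d.fulfilled = true }

// Info prints a log line, or warns once if no logger was ever configured.
func (d *delegatingSink) Info(msg string) {
	if !d.fulfilled {
		if !d.warned {
			fmt.Println("[controller-runtime] log.SetLogger(...) was never called; logs will not be displayed.")
			d.warned = true
		}
		return // output is dropped, matching the observed behavior
	}
	fmt.Println(msg)
}

func main() {
	sink := &delegatingSink{}
	sink.Info("this line is dropped") // triggers the one-time warning

	sink.SetLogger()
	sink.Info("this line is displayed")
}
```

This is why the fix is simply to call `log.SetLogger(...)` early, before the first client is constructed.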

What you expected to happen:

No warning :-)

How to reproduce it (as minimally and precisely as possible):

Not sure...

Anything else we need to know?:

Environment:

  • Kubernetes version (use kubectl version):
  • KUTTL version (use kubectl kuttl version): v0.15.1-0.20240201073944-c735bec25959
  • Cloud provider or hardware configuration:
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:
@porridge porridge added the bug Something isn't working label Apr 24, 2024
@chenmj11
Contributor

I also encountered this problem; the warning is triggered when running a lot of harnesses.

@porridge
Member Author

This thread might (or not) have some hints on how to approach this.

@kumar-mallikarjuna
Contributor

controller-runtime expects this struct for logging. We can likely use logr.New(...) for this. But the existing Logger interface would need to be made compatible with logr.LogSink.

https://github.com/kudobuilder/kuttl/blob/6f041b7d2f74e065bd7b96c6305cc224ff955181/pkg/test/utils/logger.go#L11C1-L17C2

It would be more effort, but does it make sense to migrate to zap?

@porridge
Member Author

porridge commented Jul 4, 2024

> the existing Logger interface would need to be made compatible with logr.LogSink.

This should be possible without touching the existing methods, right?

What would be the benefits of using zap? Sorry, I never quite figured out the logging landscape of kubernetes :-)

@kumar-mallikarjuna
Contributor

> > the existing Logger interface would need to be made compatible with logr.LogSink.
>
> This should be possible without touching the existing methods, right?

Not really, since the interface signatures must match.

> What would be the benefits of using zap? Sorry, I never quite figured out the logging landscape of kubernetes :-)

zap is pretty much the standard with K8s Operators now, since it works well with controller-runtime. It's also easy to integrate with cobra, and it produces neat structured logs.

@porridge
Member Author

porridge commented Jul 4, 2024

> > > the existing Logger interface would need to be made compatible with logr.LogSink.
> >
> > This should be possible without touching the existing methods, right?
>
> Not really, since the interface signatures must match.

I don't see any overlapping methods; am I missing something?

> > What would be the benefits of using zap? Sorry, I never quite figured out the logging landscape of kubernetes :-)
>
> zap is pretty much the standard with K8s Operators now, since it works well with controller-runtime. It's also easy to integrate with cobra, and it produces neat structured logs.

Right. However, what matters for kuttl is whether it can act as a proxy to testing.T.Log.

@siredmar

siredmar commented Sep 3, 2024

I am also facing this issue, but I only have one harness.

@peimanja

Seeing the same issue on every test run.

@tuxerrante

Same here.

Please consider that this makes all our testing pipelines fail, since the debug output is written to stderr:

=== RUN   kuttl
    harness.go:464: starting setup
    harness.go:252: running tests with KIND.
    harness.go:176: temp folder created /tmp/kuttl2512534945
    harness.go:205: node mount point /var/lib/docker/volumes/kuttl-test-0/_data
    harness.go:158: Starting KIND cluster
    kind.go:66: Adding Containers to KIND...

[controller-runtime] log.SetLogger(...) was never called; logs will not be displayed.
Detected at:
	>  goroutine 4 [running]:
	>  runtime/debug.Stack()
	>  	/nix/store/wkbckbd30nlhq4dxzg64q6y4vm1xx4fk-go-1.22.1/share/go/src/runtime/debug/stack.go:24 +0x5e
	>  sigs.k8s.io/controller-runtime/pkg/log.eventuallyFulfillRoot()
	>  	/home/mowsiany/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/log/log.go:60 +0xcd
	>  sigs.k8s.io/controller-runtime/pkg/log.(*delegatingLogSink).WithName(0xc00005e780, {0x19e4a47, 0x14})
	>  	/home/mowsiany/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/log/deleg.go:147 +0x3e
	>  github.com/go-logr/logr.Logger.WithName({{0x1c96d40, 0xc00005e780}, 0x0}, {0x19e4a47?, 0x10?})
	>  	/home/mowsiany/go/pkg/mod/github.com/go-logr/[email protected]/logr.go:345 +0x36
	>  sigs.k8s.io/controller-runtime/pkg/client.newClient(0xc000649690?, {0x0, 0xc000444af0, {0x1c98520, 0xc00011c550}, 0x0, 0x0})
	>  	/home/mowsiany/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/client/client.go:118 +0xdb
	>  sigs.k8s.io/controller-runtime/pkg/client.New(0xc0003bc008?, {0x0, 0xc000444af0, {0x1c98520, 0xc00011c550}, 0x0, 0x0})
	>  	/home/mowsiany/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/client/client.go:98 +0x55
	>  github.com/kudobuilder/kuttl/pkg/test/utils.NewRetryClient(0xc0003bc008, {0x0, 0xc000444af0, {0x1c98520, 0xc00011c550}, 0x0, 0x0})
	>  	/home/mowsiany/go/src/github.com/kudobuilder/kuttl/pkg/test/utils/kubernetes.go:177 +0xf1
	>  github.com/kudobuilder/kuttl/pkg/test/utils.WaitForSA(0xc0003bc008, {0x19d0941, 0x7}, {0x19d0941, 0x7})
	>  	/home/mowsiany/go/src/github.com/kudobuilder/kuttl/pkg/test/utils/kubernetes.go:994 +0x85
	>  github.com/kudobuilder/kuttl/pkg/test.(*Harness).waitForFunctionalCluster(0xc00026e308)
	>  	/home/mowsiany/go/src/github.com/kudobuilder/kuttl/pkg/test/harness.go:294 +0x38
	>  github.com/kudobuilder/kuttl/pkg/test.(*Harness).Config(0xc00026e308)
	>  	/home/mowsiany/go/src/github.com/kudobuilder/kuttl/pkg/test/harness.go:275 +0x450
	>  github.com/kudobuilder/kuttl/pkg/test.(*Harness).Client(0xc00026e308, 0xb0?)
	>  	/home/mowsiany/go/src/github.com/kudobuilder/kuttl/pkg/test/harness.go:318 +0x11b
	>  github.com/kudobuilder/kuttl/pkg/test.(*Harness).Setup(0xc00026e308)
	>  	/home/mowsiany/go/src/github.com/kudobuilder/kuttl/pkg/test/harness.go:466 +0x225
	>  github.com/kudobuilder/kuttl/pkg/test.(*Harness).Run(0xc00026e308)
	>  	/home/mowsiany/go/src/github.com/kudobuilder/kuttl/pkg/test/harness.go:455 +0x59
	>  github.com/kudobuilder/kuttl/pkg/kuttlctl/cmd.newTestCmd.func2.1(0xc0001411e0)
	>  	/home/mowsiany/go/src/github.com/kudobuilder/kuttl/pkg/kuttlctl/cmd/test.go:236 +0x105
	>  testing.tRunner(0xc0001411e0, 0xc00018acf0)
	>  	/nix/store/wkbckbd30nlhq4dxzg64q6y4vm1xx4fk-go-1.22.1/share/go/src/testing/testing.go:1689 +0xfb
	>  created by testing.(*T).Run in goroutine 1
	>  	/nix/store/wkbckbd30nlhq4dxzg64q6y4vm1xx4fk-go-1.22.1/share/go/src/testing/testing.go:1742 +0x390
    harness.go:278: Successful connection to cluster at: https://127.0.0.1:41747/

@porridge
Member Author

@tuxerrante what do you mean exactly by

> making all testing pipelines to fail

?

It's certainly not affecting any of my kuttl usage.. 🤔

@tuxerrante

Hi,
I mean that if your internal CI task is configured not to proceed on errors, which I believe is the standard behaviour in most environments, then kuttl writing to stderr makes our pipelines fail until stderr is ignored, which is not something we want.

Thanks

@porridge
Member Author

Which environments are these? I cannot remember seeing a testing system that fails when it sees output on stderr (as opposed to failing on non-zero exit status which I agree is the sane way). In fact, I do recall seeing guidelines which suggest emitting diagnostic output on stderr, rather than stdout, even if these are not fatal. 🤔
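For what it's worth, the robust way to gate a pipeline is on the exit status while capturing stderr for inspection. A small shell sketch (`run_tests` is a hypothetical stand-in for the real `kubectl kuttl test` invocation):

```shell
# Stand-in for `kubectl kuttl test ...`: writes a diagnostic to stderr
# but exits 0, just like the warning discussed in this issue.
run_tests() {
  echo "[controller-runtime] log.SetLogger(...) was never called" >&2
  echo "all tests passed"
  return 0
}

# Gate on the exit status; keep stderr in a file for later inspection.
if run_tests 2>warnings.log; then
  status=ok
else
  status=fail
fi
echo "$status"
```

This way diagnostics on stderr never fail the job by themselves, but they are still available when a run does fail.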

6 participants