Go Remote Integration Tests Coverage Profiling

The Go 1.20 release brought us a new feature: coverage profiling support for integration tests. In short:
  1. Specify the -cover flag when building the Go binary: go build -cover
  2. Set the environment variable GOCOVERDIR when running the binary built in step 1 so that it writes out profile data files. The files are written only when the binary exits.
  3. Use go tool covdata to parse the profile data files. It can combine profile data from multiple binary executions.
 
This feature is pretty helpful for testing logic in main.go or places where we need to directly call external dependencies. However, how often do you run integration tests locally?
 
In the cloud native world, we often run integration tests from a continuous integration workflow (e.g. a GitHub Actions workflow) by:
  1. Packaging the Go binary in a container image
  2. Deploying it in a real test environment (e.g. a k8s cluster)
  3. Testing against the deployed binary
  4. Tearing down the deployment afterwards
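
A hedged sketch of what such a workflow might look like as a GitHub Actions job (all names and scripts here are hypothetical placeholders, not part of the tool):

```yaml
# Hypothetical CI workflow outline; build-image.sh, deploy.sh, etc. are
# placeholders for whatever your project actually uses.
name: integration-tests
on: [pull_request]
jobs:
  integ:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Package the Go binary in a container image
        run: ./hack/build-image.sh
      - name: Deploy to the test cluster
        run: ./hack/deploy.sh
      - name: Run tests against the deployed binary
        run: ./hack/run-integ-tests.sh
      - name: Tear down
        if: always()
        run: ./hack/teardown.sh
```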
 
Such an integration test deployment tends to be ephemeral, so retrieving the profile data files after the integration tests finish becomes a challenge. For example:
  • k8s provides a wide range of Storage Classes for persistent storage, but the data stored there cannot easily be accessed outside of the k8s workload. How would your CI workflow read data from, say, a GCE persistent disk?
  • Cloud Run now supports mounting Filestore or GCS storage, but the complexity of that setup is hard to justify for this purpose.
 
To make things easier, I developed an experimental tool (feel free to check out the source code; it's simple!): https://github.com/yolocs/go-integcov
 
TL;DR - Build a wrapper program that runs any arbitrary Go binary expecting graceful shutdown. The wrapper understands GOCOVERDIR and will upload all the files in GOCOVERDIR to the given GCS bucket once the internal Go binary exits. The wrapper is built into a container image (ghcr.io/yolocs/go-integcov) that’s based on a secure base image cgr.dev/chainguard/static , which can be used as a base image.
 
To use it:
  • Build a container image for integration testing:
      1. Build your Go binary with the -cover flag and use ghcr.io/yolocs/go-integcov as the base image for your container image.
      2. Sample Dockerfile:

      FROM --platform=$BUILDPLATFORM golang:1.20 AS builder
      COPY app.go ./
      # Add -cover flag
      RUN go build -cover -o app .

      FROM ghcr.io/yolocs/go-integcov
      COPY --from=builder /go/app /app
      ENTRYPOINT ["/app"]
  • Deploy the container image:
      1. Instead of using the default ENTRYPOINT (which is likely your Go binary), use go-integcov (the wrapper program in the base image) to run your Go binary, e.g. go-integcov your-app. You can add your other flags as you normally would.
      2. Specify one additional env var, INTEGCOV_STORAGE, with the GCS bucket to which the profile data files should be uploaded.
      3. Sample k8s deployment:

      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: my-integ
        labels:
          app: my-integ
      spec:
        replicas: 1
        selector:
          matchLabels:
            app: my-integ
        template:
          metadata:
            labels:
              app: my-integ
          spec:
            containers:
              - name: my-integ
                image: my-registry/my-integ:latest
                command: ["go-integcov"] # Override ENTRYPOINT
                args: ["app"]
                env:
                  - name: INTEGCOV_STORAGE
                    value: 'gs://my-bucket/my-folder'
                ports:
                  - containerPort: 80
  • Finish your test and delete your deployment.
  • If everything works as expected, you can now download the profile data files from the GCS bucket you provided in INTEGCOV_STORAGE. From there, you can use go tool covdata to parse the coverage.
 
My experiment uses GCS as the output storage, but nothing stops you from applying the same logic with a different storage backend 🙂
 
In this experiment, I tried to keep the user experience as simple as possible while making the output profile data as accessible as possible from the CI workflow. I think it's far simpler than mounting persistent storage and retrieving data from it. However, it still requires non-trivial changes to the build process. It's an open question whether it's worth the trouble just to get some additional coverage profiling.