Testing
All Go tests should be written in one of two ways:
- As a test table, or
- As individual `t.Run` subtests.

Use test tables for most cases. Use `t.Run` subtests when:
- The number of input arguments in the test table exceeds 3, or
- The complexity of assertions increases (we should never use `if` statements in test tables), or
- Individual test cases require unique setup logic that would need a setup function in the test table.
General Rules
- Always call `t.Parallel()` at the top of every test function and within each subtest, unless:
  - It's an integration test (files ending in `_integration_test.go`),
  - It performs file I/O, shell commands, or interacts with SOPS or the OS filesystem, or
  - It has the potential to fail with `--race`.
- Always use `t.Context()` when a `context.Context` is required in tests instead of `context.Background()`.
- All assertions should use the `assert` (and `require` when necessary) library.
- Prefer one assertion per test when possible.
- Never use `else` blocks — use assert logic instead.
- Never redeclare variables like `test := test` (variable shadowing).
- Use `got` as the variable name for actual results when comparing against expected values.
- Test names should:
  - Start with a capitalised first word,
  - Use spaces between words,
  - Not use full title case (e.g., `"Payload default"`, `"GoLang explicit true"`).
- Always include all relevant test cases, even edge or error conditions.
- If 100% coverage is not possible, explain why in a brief note above the test function (no inline comments).
Test Organisation
- One test function per exported function/method — add new test cases as subtests within the existing test function rather than creating separate test functions.
- Only create a new test function if:
  - Testing a distinctly different aspect that warrants complete separation (e.g., `TestTracker_Add` vs `TestTracker_Save`), or
  - The original test function would become unwieldy (>200 lines) with the addition.
- Group related test cases using descriptive subtest names that explain what’s being tested.
- Aim for comprehensive coverage within each test function rather than fragmenting tests across multiple functions.
Test Tables
The test table should be:
- In `map[string]struct{ ... }` format, where the string key is the name of the test.
- The test loop should read `for name, test := range tt`, where `tt` is the name of the test table variable.
- Use consistent field names:
  - `input` for inputs,
  - `want` for expected outputs,
  - `wantErr` if the function returns an error.
- For error assertions, write: `assert.Equal(t, test.wantErr, err != nil)`
- Avoid `if`, `switch`, or branching logic inside the test loop.
- Don't add any code comments within the test unless explaining the why.
Example:
```go
func TestExample(t *testing.T) {
	t.Parallel()

	tt := map[string]struct {
		input string
		want  string
	}{
		"Example Case": {input: "foo", want: "bar"},
	}

	for name, test := range tt {
		t.Run(name, func(t *testing.T) {
			t.Parallel()
			got := DoSomething(test.input)
			assert.Equal(t, test.want, got)
		})
	}
}
```
Subtests with t.Run
- Use `require` for preconditions (e.g. setup or function calls that must not fail).
- Use `assert` for validation of expected outputs.
- Use `t.Log()` to describe sections within a subtest instead of comments when the assertions grow larger.
- Maintain readability and determinism — tests should clearly convey intent and run independently.
- Each test should be self-contained with no shared mutable state.
Example:
```go
func TestApp_OrderedCommands(t *testing.T) {
	t.Parallel()

	t.Run("Missing Skipped", func(t *testing.T) {
		t.Parallel()

		app := &App{Commands: map[Command]CommandSpec{}}
		commands := app.OrderedCommands()
		assert.Len(t, commands, 0)
	})

	t.Run("Default Populated", func(t *testing.T) {
		t.Parallel()

		app := &App{}
		err := app.applyDefaults()
		require.NoError(t, err)

		commands := app.OrderedCommands()
		require.Len(t, commands, 4)
		assert.Equal(t, "format", commands[0].Name)
	})
}
```
Mocking
Mocks should only be introduced when a test depends on an external interface or system boundary — for example, Terraform execution, encryption providers, or file I/O wrappers.
- Prefer fakes or real in-memory types where possible.
- Place generated mocks under `internal/mocks/` and prefix them with `Mock` (e.g. `MockInfraManager`).
- Clean up with `defer ctrl.Finish()` and avoid over-mocking.
- Use `gomock` for creating mocks.
- Generate mocks into the `internal/mocks/` directory using the example below.
Example:
```sh
go tool go.uber.org/mock/mockgen -source=gen.go -destination ../mocks/fs.go -package=mocks
```
Setup Functions
- If a test contains repeated setup logic (e.g., creating `App` instances, default values, or common test data), scan for a `setup(t)` function.
- If no `setup(t)` function exists, create one to encapsulate reusable logic.
- The `setup(t)` function should:
  - Accept `t *testing.T` as an argument.
  - Return any values required by multiple subtests (e.g., test structs, default app objects).
  - Call `t.Helper()` at the start.
- Use `setup(t)` in subtests to maintain readability, avoid duplication, and keep each test self-contained.
Example:
```go
func setup(t *testing.T) *App {
	t.Helper()

	app := &App{Name: "web", Type: AppTypeGoLang, Path: "./"}
	err := app.applyDefaults()
	require.NoError(t, err)

	return app
}

func TestApp_OrderedCommands(t *testing.T) {
	t.Parallel()

	t.Run("Default Populated", func(t *testing.T) {
		t.Parallel()

		app := setup(t)
		commands := app.OrderedCommands()
		require.Len(t, commands, 4)
		assert.Equal(t, "format", commands[0].Name)
	})
}
```