• balsoft@lemmy.ml · 2 days ago

    Eww, no. You’re doing tests wrong. The point of tests is to understand whether changes to the code (or its dependencies) break any functionality. Failing tests make that task very difficult and time-consuming for the people who need it most, i.e. people new to the project: “Is this test failing because of something I’ve done? <half an hour of debugging later> Oh, it was broken before my changes too!”. If you insist on adding broken tests, at least mark them as “expected to fail”, so that they don’t affect the overall test suite result and the checkmark stays green (and whoever fixes the functionality has to un-mark them). You should never merge PRs/MRs which fail any tests - it is an extremely bad habit and breeds a bad culture in your project.
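
    For example, with pytest (one tool among many; the function and test names here are hypothetical), `xfail(strict=True)` does exactly this: the suite stays green while the functionality is broken, and goes red once the test unexpectedly passes, forcing whoever fixed it to remove the marker:

    ```python
    import pytest

    def reset_password(user: str) -> bool:
        # Hypothetical, deliberately unfinished implementation.
        raise NotImplementedError

    @pytest.mark.xfail(reason="password reset is known-broken", strict=True)
    def test_user_can_reset_password():
        # Reported as XFAIL while broken, so the overall run stays green.
        # With strict=True, once this starts passing the run goes red
        # until someone removes the marker (the "un-mark" step above).
        assert reset_password("alice") is True
    ```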

    • Kayana@ttrpg.network · 2 days ago

      There are two different things mentioned here, which I feel I need to clarify:

      First, what you said about merging / creating a PR with broken tests: absolutely you shouldn’t do that, because you should only merge once the feature is finished. If a test doesn’t pass, then either it’s testing for the wrong behaviour and should be rewritten, or the functionality doesn’t fully work yet, so the feature isn’t ready to be merged. Even if you’re only waiting for some other feature to land (because you need to integrate with it or something), you’re still waiting, so the feature isn’t ready.

      At the same time, the OP’s point about tests being supposed to fail at first isn’t too far off the mark either, because that’s precisely how TDD (test-driven development) works. If you’re applying that philosophy (which I personally endorse), then that’s exactly what you do: write the test first, checking for the expected behaviour (which is taken from the specification), watch it fail (it obviously will), and only then write the code implementing that behaviour.
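
      A minimal sketch of that red/green loop (the function and spec here are made up for illustration):

      ```python
      # Step 1 ("red"): the test is written first, straight from the spec,
      # e.g. "slugify() lowercases a title and replaces spaces with dashes".
      # At this point slugify doesn't exist yet, so the test fails.
      def test_slugify_lowercases_and_dashes():
          assert slugify("Hello World") == "hello-world"

      # Step 2 ("green"): only now is the implementation written, and only
      # enough of it to make the test above pass.
      def slugify(title: str) -> str:
          return title.lower().replace(" ", "-")
      ```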

      But, even then, that failing test should be contained to e.g. the feature branch you’re working on, never going into a PR while it’s still failing.

      Once that feature has been merged, then yes, the test should never fail again, because a failure after that point means some new change has broken part of that feature. Even if the new feature is considered “essential” or “high priority” while the old one is not, ignoring the failure is one of the easiest ways to build up technical debt, so you should damn well fix it now.

      • balsoft@lemmy.ml · 2 days ago

        I concede that on a feature branch, before a PR is made, it’s ok to have some failing tests, as long as the only tests failing are related to that feature. You should squash those commits after the feature is complete so that no commit has a failing test once it’s on master.

        (I’m also a fan of TDD, although for me it means Type-Driven Development, but I digress…)

    • wewbull@feddit.uk · 2 days ago

      You’re both right. You’re both wrong.

      • You write tests for functionality before you write the functionality.
      • You code the functionality so the tests pass.
      • Then, and only then, the test becomes a regression test and is enabled in your CI automation (see the sketch below).
      • If the test ever breaks again, the merge is blocked.
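
      One way to stage that with pytest, as a sketch (the `wip` marker name and the test are my own invention, not anything from this thread): red-phase tests carry a marker that the CI run excludes, and deleting the marker is what promotes them to regression tests:

      ```python
      import pytest

      # Assumes `wip` is registered in pytest.ini:
      #   markers = wip: test written ahead of its implementation
      # CI runs `pytest -m "not wip"`, so marked tests can fail freely
      # on the feature branch without ever blocking a merge.

      def export_report(fmt: str) -> str:
          # Hypothetical stub; the real implementation comes later.
          raise NotImplementedError

      @pytest.mark.wip
      def test_new_export_format():
          # Once this passes locally, deleting @pytest.mark.wip promotes it
          # to a regression test: from then on, a failure blocks the merge.
          assert export_report(fmt="csv").startswith("id,")
      ```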

      If you only write tests after you’ve written the code, then the test will test that the code does what the code does. Your brain is already polluted and you’re not capable of writing a good test.

      Having tests that fail is fine, as long as they’re not part of your regression tests.

      • balsoft@lemmy.ml · 2 days ago

        > • You write tests for functionality before you write the functionality.
        > • You code the functionality so the tests pass.
        > • Then, and only then, the test becomes a regression test and is enabled in your CI automation.
        > • If the test ever breaks again, the merge is blocked.

        I disagree. Merging should be blocked on any failing test: no commit should land on master with one. If you want to write tests first, do that on a feature branch, but squash the commits properly before merging. Or add the tests as disabled first and enable them once the feature is implemented. The enabled tests must always pass on every commit on master.
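
        One way to do the “disabled first” variant with pytest (the feature and names are hypothetical): commit the test skipped, so it lives in the codebase without ever turning master red, then delete the skip in the PR that implements the feature:

        ```python
        import pytest

        @pytest.mark.skip(reason="feature X not implemented yet; enable in "
                                 "the PR that implements it")
        def test_feature_x_roundtrip():
            # Safe on master in this state: pytest reports it as SKIPPED,
            # so every commit keeps a fully green suite. The body never
            # runs, so feature_x() not existing yet is also fine.
            assert feature_x("input") == "expected output"
        ```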