Change Impact Analysis

How to tackle the side effects of your code changes in GitHub PRs

Foresight improves your observability capabilities over GitHub Actions and lets you monitor how your changes affect your codebase
Oguzhan Ozdemir
4 min read
Untested code changes shall not pass!

Why Test Gap Analysis is important

Analyzing the test gaps in your continuous integration pipelines matters because it identifies which parts of your code are released without being tested, letting you focus your testing effort and resources where they count.

Bugs mostly live in the code areas that have been changed recently, so many of them can be avoided simply by checking which code changes have not been tested yet. That is why testers pay great attention to exhaustively testing new and changed code.

Unfortunately, this is not always feasible in practice. Still, gap analysis in testing brings substantial benefits for software team performance, application performance, and end-user satisfaction.

First of all, developers and testers can easily align with each other, even when they work on different continents and in different time zones; a 2013 paper describes how to apply this in practice. Secondly, test gap analysis makes it immediately apparent to developers and testers where to write new tests and how to extend existing tests and test suites. Additionally, it becomes easy to understand the impact of code changes while reviewing pull requests.

As a result, test gap analysis enables you to identify all untested code changes, so you can close these gaps before going to production.
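As a toy illustration (not Foresight's actual implementation), you can think of a test gap as a set difference: the lines a diff changed, minus the lines your test run actually executed.

```javascript
// Toy model of test gap analysis: the gap is the set of changed lines
// that no test executed. Illustration only; Foresight computes this
// from your diff and coverage reports for you.
function testGap(changedLines, coveredLines) {
    const covered = new Set(coveredLines);
    return changedLines.filter(line => !covered.has(line));
}

// A diff touched lines 10-14, but the test run only covered 10-12:
console.log(testGap([10, 11, 12, 13, 14], [10, 11, 12])); // → [ 13, 14 ]
```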

Preparing an example project

Before we talk about the code change we are going to make, let's set up the necessary environment. I have a sample application that our PM Burak prepared in the past. It's a simple to-do application backed by a small Redis database from Upstash. I did some refactoring to prepare the application for this post. It has some basic functionality, but to show off Foresight's features, I will implement a simple addition.

Check out how to power up the observability of your GitHub Actions with these kits:
https://github.com/runforesight/foresight-test-kit-action
https://github.com/runforesight/foresight-workflow-kit-action

You can simply fork the repository and create a few repository secrets for the GitHub Actions workflow. The project uses Upstash as its serverless Redis provider, so we need the Redis REST URL and token to run it.

Go to upstash.com and sign up. Don't worry, there will be no cost for the project we're working on. After signing up and creating a new database, you can find the related credentials in the Upstash console.

Once this is done, let's sign up for Foresight and install its GitHub app so it has access to our forked repository.

After signing up, we should see a screen with Connect Pipeline instructions. Follow them to install the Foresight app and grant access to the repositories we choose. GitHub will then redirect us back to Foresight; select the repositories we want to include in our project and hit Create.

At this point, Foresight documents how to integrate its GitHub Actions to enhance the observability capabilities. In our project, all of this is already set up, and all we need is the API key. Let's copy it as well and set it in our repository secrets.

In the end, your repository secrets should include the Upstash Redis REST URL and token, plus the Foresight API key.

Now we are ready to run our workflow. 

Diving In

As an addition to the existing features, I'll implement a basic profanity check for the to-do items. Let's create a branch and add a simple function to the `add.js` file:


// Returns true if the item contains any word from the deny list.
function profanityCheck(item) {
    const wordList = [
        'bullshit',
        'damn',
        'douche',
        'stupid'
    ];

    return wordList.some(v => item.includes(v));
}
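One caveat worth noting: `includes` does substring matching, so a list word hiding inside a longer, innocent word would also be flagged. A slightly stricter variant (a hypothetical sketch, not part of the PR) matches whole words only:

```javascript
// Hypothetical stricter variant: only flag whole words, so e.g.
// 'stupidity' passes even though it contains 'stupid' as a substring.
function profanityCheckStrict(item) {
    const wordList = ['bullshit', 'damn', 'douche', 'stupid'];
    return wordList.some(w => new RegExp(`\\b${w}\\b`, 'i').test(item));
}

console.log(profanityCheckStrict('that is stupid'));  // true
console.log(profanityCheckStrict('sheer stupidity')); // false
```

For this post, the simple substring check is enough, so we'll stick with it.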

And in the default function, right before we add the item to our database, let's add this check:


    // …
    if (profanityCheck(todo)) {
        return res.status(400).json({ message: 'Todo input has a forbidden word!' })
    }
    // …
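To see where the guard sits in the request flow, here is a rough, self-contained sketch of the handler. The error messages and the check come from the snippets above; the handler shape is an assumption, and the Upstash Redis write is stubbed out with a plain array, so check the PR for the real file.

```javascript
// Hypothetical sketch of the add.js handler's control flow; the real
// app writes to Upstash Redis instead of this in-memory array.
const fakeDb = [];

function profanityCheck(item) {
    const wordList = ['bullshit', 'damn', 'douche', 'stupid'];
    return wordList.some(v => item.includes(v));
}

async function handler(req, res) {
    const { todo } = req.body;

    if (!todo) {
        return res.status(400).json({ message: 'Todo parameter required!' });
    }
    if (profanityCheck(todo)) {
        return res.status(400).json({ message: 'Todo input has a forbidden word!' });
    }

    fakeDb.push(todo); // in the real app: a Redis write
    return res.status(200).json({ message: 'Todo added.' });
}
```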

Let’s push our changes and open a PR. In the end, `add.js` should look like this. See the example PR on GitHub.

We can now observe the GitHub Action workflow run from Foresight.

To see the details, click on the run; there, we can see information related to the workflow run. We are focused on the tests and the change impact analysis, so let's open that view.

It seems we have two lines that are not tested with this PR.

So it seems we don't have specs that test input containing profanity, or empty input. We can fix that easily: add the following tests to our `todo.spec.js` file and push to the branch to run the tests again. It might take a couple of minutes for the results to show up in Foresight.


test('fail to add todo item that includes a forbidden word', async ({ page }) => {
  // The fourth item in this array includes a forbidden word.
  const todoName = TODO_ITEMS[3];

  // Text input
  await page.locator('#todo').fill(todoName);
  await page.locator('#todo').press('Enter');

  await expect(page.locator('.Home_card__2SdtB').first()).not.toHaveText(todoName);

  await expect(page.locator('.error-msg').first()).toHaveText('Todo input has a forbidden word!');
});

test('fail to add empty input', async ({ page }) => {
  const todoName = '';

  // Text input
  await page.locator('#todo').fill(todoName);
  await page.locator('#todo').press('Enter');

  await expect(page.locator('.Home_card__2SdtB').first()).not.toHaveText(todoName);

  await expect(page.locator('.error-msg').first()).toHaveText('Todo parameter required!');
});

Now everything looks green.

All lines of changed code are covered with tests!

Conclusion

To sum up what we've done here: we installed Foresight to get visibility into our GitHub Actions workflow runs and added Foresight's workflow and test kits to our workflow. We then added a small feature, and Foresight showed us how the change affects our codebase without our having to go through all the logs and coverage reports manually.

This improved our observability over GitHub Actions and let us monitor how our changes affected the codebase. With a little configuration, we can now track these effects in one place with ease.

To learn more about the kits and the product we've implemented, see the following resources:

https://runforesight.com

https://github.com/runforesight/foresight-workflow-kit-action

https://github.com/runforesight/foresight-test-kit-action