Unit Test Golf: ManagEnv

I have an open source project on GitHub called ManagEnv. I wrote it about a year and a half ago to deal with some problems I was having automatically deploying code while environment variables were being added and changed by developers. Obviously we try not to keep these in source control, so it became a bit unwieldy to figure out which environment variables needed to go in which environment. This project gave my team and me a way to centrally manage those environment variables, and then gave me a way to automatically export a .env file through Jenkins to be consumed by my Docker containers.

There are other methods of doing this. You can use a tool like Chef or Ansible, or you can use AWS Parameter Store, or a host of other methods. This project wasn’t meant to replace those or be some sort of game changer in the area. It was meant to be simpler to set up and use, and only for this one specific purpose. If you need a way to manage environment variable files among your small developer team, it’s a decent solution.

Now that I’ve explained the project, I’ll explain why I didn’t write any tests for it: I was lazy and didn’t like writing unit tests at the time. I still don’t, by the way. I’ve learned that I need to write them anyway, and I do for professional gigs. But honestly, when I’m developing my own projects on my own time, I’ve mostly skipped them.

But now, here I am, wanting to write a blog post on good testing practices, and an open source project of mine has none. How could you even take me seriously?

So, in the spirit of making things better and having some fun, I decided to try something different: a speed run, of sorts. I’m going to try to get 100% unit test coverage while writing the smallest test suite possible.

The Rules

Test coverage is kind of a bad primary metric. The problem is that it doesn’t tell you anything about the tests that are being run. Whether the lines of code all get run at some point is secondary; what actually matters is whether a test tells you when something is broken. Because of this, it’s fairly easy to game the system: I can make a test suite run every line of code and not actually test any of the output.
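
To make that concrete, here’s the kind of “test” that games the metric. It runs the controller action, so every line in it counts as covered, but with no assertions it will pass even if the page is completely broken. (This is a deliberately bad example I’m making up to illustrate the point, not something from the project.)

public function testIndexCoverageOnly()
{
   // Hitting the route executes every line of the index() action, so the
   // coverage number goes up...
   $this->call('GET', '/home');
   // ...but nothing is asserted, so this never tells us anything is wrong.
}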

Since this is my project, I’d like these tests to be useful. So I’ll put a few rules in place to make them so:

1. Code Coverage will be generated by PHPUnit

This is obvious: the project is in Laravel 5.2, so PHPUnit is the platform.

2. Each test must measure at least one output or one side effect

This is to keep us honest: tests should actually test something.

3. Test Suite size will be measured by number of assertions

It’s hard to come up with a metric here that isn’t game-able, but I think this is the best option. Number of tests won’t work, since I could easily write one “test” method that runs everything else. Lines of code is kind of wishy-washy, but I’ll measure that too.

4. I’m Not Testing Views

This is a basic MVC project, so Blade views are used to generate HTML. I will test that the correct data gets passed to the views, but I don’t want to test that the correct HTML was generated. That’s better achieved with a front-end testing tool.

5. I Only Care About Stuff I Wrote

I can assume that anything created or generated by Laravel is probably already thoroughly tested. I’m only looking to test the code that I added myself.

Let’s Get Started

I ran code coverage on the Example Test that Laravel comes with. I only care about what’s in the “app” folder here, and it looks like some of it is already complete. The truth is that some of the files (specifically the Providers and the Console Kernel) have no code in them; they’re just generated classes. So that’s where I’m starting: 95% of the lines left to go.

Normally with unit tests you want to start at the lowest level of the object hierarchy that you can. Here, you’d start with the Models (I have two: Environment and Variable) and test all the public methods with a variety of inputs in order to get coverage. However, since I’m playing golf here, I’m going to start from the highest level and try to write tests that hit as many different lines of code as possible before I end up at a mock.

Laravel makes this ridiculously easy, so long as you follow the “Laravel Way”. In other frameworks I’m familiar with, I’m used to mocking out external connections, specifically database connections. That’s actually pretty difficult with Laravel and Eloquent, because of the way method chaining is used and how much of the Model logic is hidden from view. Laravel instead gives you other options where the database connection isn’t mocked out at all: rows really are inserted, but rollback functionality is enabled via either Database Transactions or Migrations. You can read more about this in Laravel’s testing documentation.
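
For reference, the migration-based option is just a different trait on the test case. A minimal sketch of that flavour, not code from the project (it uses the “parent” model factory that shows up in the tests below); the first real test uses the transaction-based trait instead:

class ExampleDatabaseTest extends TestCase
{
   // Runs the migrations before each test and rolls them back afterwards,
   // instead of wrapping each test in a database transaction.
   use \Illuminate\Foundation\Testing\DatabaseMigrations;

   public function testRowsGetRolledBack()
   {
      // Anything inserted here exists for this test only; the rollback
      // happens automatically when the test finishes.
      $env = factory(App\Environment::class, 'parent')->create();
      $this->seeInDatabase('environments', ['id' => $env->id]);
   }
}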

First Test

class HomeControllerTest extends TestCase
{
   use \Illuminate\Foundation\Testing\DatabaseTransactions;
   use \Illuminate\Foundation\Testing\WithoutMiddleware;

   public function testIndex()
   {
      $this->call('GET', '/home');
      $this->assertViewHas('environments');
   }
}

Alright, sweet. One assertion gained 3% on our total test coverage. That’s great news for our golf game.

 

Note that this isn’t the test that I would recommend writing. It technically follows our rules, but it doesn’t really test much. This test runs the following method:

public function index()
{
    $parents = $this->environment->whereNull('parent_id')->get();
    return view('home', ['environments' => $parents]);
}

So this method does something (it looks up some database records and passes them into a view), but the test doesn’t really check that. It just checks that we passed an array key named environments to the view; reading the function could have told you that.

This sort of test is why we say code coverage is a flawed metric. My test ran every line in this function, and I made an assertion that could be true or false, and the test passed. But I didn’t really test what the function did! Here’s a better version:

public function testIndex()
{
   $id = factory(App\Environment::class, 'parent')->create()->id;
   $response = $this->call('GET', '/home');
   $this->assertViewHas('environments');
   $data = $response->getOriginalContent()->getData();
   $hasEnv = false;
   foreach ($data['environments'] as $environment) {
      if ($environment->id == $id) {
         $hasEnv = true;
         break;
      }
   }
   $this->assertTrue($hasEnv, "Created Environment Not Found");
}

The factory at the top creates a “parent” environment, meaning an environment at the top level of the hierarchy. I built that in the ModelFactory; the details are in the Laravel testing documentation I linked to. We then call the route, and instead of just checking that a key exists, we look for the object I created. This at least tells us that our query worked the way we expect, which is a better test.
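
For completeness, the “parent” factory itself lives in database/factories/ModelFactory.php. I won’t swear to the exact column list here, but it’s roughly this shape:

$factory->defineAs(App\Environment::class, 'parent', function (Faker\Generator $faker) {
   return [
      // A parent environment sits at the top of the hierarchy, so parent_id stays null.
      'name' => $faker->unique()->word,
      'parent_id' => null,
   ];
});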

Let’s move on. Two more tests give me the next completed function.

public function testAddEnvironment()
{
   /** @var \Illuminate\Http\JsonResponse $response */
   $response = $this->call('POST', '/environment/', ['name' => 'home_controller_test_env']);
   $data = json_decode($response->content(), true);
   $this->assertTrue($data['success']);
}

public function testAddEnvironmentDuplicate()
{
   $name = factory(App\Environment::class, 'parent')->create()->name;
   $response = $this->call('POST', '/environment/', ['name' => $name]);
   $data = json_decode($response->content(), true);
   $this->assertFalse($data['success']);
}

This gives me 100% test coverage over the function, but it’s really an incomplete test again. I’d rather have more assertions, to prove that I have some idea of what’s going on. Golf is hard. I’d rather do this:

public function testAddParentEnvironment()
{
   /** @var \Illuminate\Http\JsonResponse $response */
   $response = $this->call('POST', '/environment/', ['name' => 'home_controller_test_env']);
   $this->seeInDatabase('environments', ['name' => 'home_controller_test_env', 'parent_id' => null]);
   $data = json_decode($response->content(), true);
   $this->assertArrayHasKey('success', $data);
   $this->assertTrue($data['success']);
   $this->assertArrayHasKey('id', $data);
}

public function testAddParentEnvironmentDuplicate()
{
   $name = factory(App\Environment::class, 'parent')->create()->name;
   /** @var \Illuminate\Http\JsonResponse $response */
   $response = $this->call('POST', '/environment/', ['name' => $name]);
   $this->seeInDatabase('environments', ['name' => $name, 'parent_id' => null]);
   $data = json_decode($response->content(), true);
   $this->assertArrayHasKey('success', $data);
   $this->assertFalse($data['success']);
   $this->assertArrayNotHasKey('id', $data);
   $this->assertArrayHasKey('error', $data);
   $this->assertEquals('Name Must Be Unique', $data['error']);
}

This pattern continues. If you don’t care about actually testing anything, getting test coverage is surprisingly easy. I have also found some unreachable parts of my code along the way, which I wasn’t quite expecting.

So, 11 tests, 11 assertions, and some carefully placed “@codeCoverageIgnore” annotations later (to ignore generated files that deal with basic authentication for this project), I’m pretty far along!
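
For anyone who hasn’t used that annotation before, it just goes in the docblock of whatever you want PHPUnit to leave out of the report. On one of the generated auth controllers it looks something like this (a sketch only; the real files are Laravel’s stock scaffolding):

/**
 * Generated Laravel auth scaffolding; excluded from the coverage report.
 *
 * @codeCoverageIgnore
 */
class AuthController extends Controller
{
   // ...
}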

I created a “Cleanup” test case to handle the weird edge cases. In it I finished up the rest of the “Encryptable” lines, which had to do with making decryption throw an error. I also covered the one method in Variable that doesn’t get called directly. Here are those tests:

public function testGetVariableEnvironment()
{
   $var = factory(App\Variable::class, 'parentVar')->create();
   $this->assertEquals($var->environment_id, $var->environment->id);
}

public function testFailToDecrypt()
{
   $val = 'test';
   $var = factory(App\Variable::class, 'parentVar')->create(['value' => $val]);
   \Illuminate\Support\Facades\Crypt::shouldReceive('decrypt')->andThrow(\Illuminate\Contracts\Encryption\DecryptException::class);
   $this->assertNotEquals($val, $var->value);
}
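
For context, that second test is poking at the Encryptable trait’s accessor. I won’t reproduce the trait verbatim here, but the behaviour the test depends on is roughly this: if decryption blows up, you get something other than the plaintext back, which is why the check is assertNotEquals rather than expecting an exception. A sketch of that shape, assuming the models list their encrypted attributes in an $encryptable property:

use Illuminate\Contracts\Encryption\DecryptException;
use Illuminate\Support\Facades\Crypt;

trait Encryptable
{
   public function getAttribute($key)
   {
      $value = parent::getAttribute($key);

      // Only attributes listed in the model's $encryptable array are encrypted at rest.
      if (in_array($key, $this->encryptable) && $value !== null) {
         try {
            $value = Crypt::decrypt($value);
         } catch (DecryptException $e) {
            // Decryption failed: fall through and return the stored value,
            // which will not equal the original plaintext.
         }
      }

      return $value;
   }
}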

Then I aimed at the commands. Oddly enough, when testing one of the commands (I started with CreateEnv), it marked all of the commands as 100% complete. I think there’s an error somewhere between Laravel, PHPUnit, and PHPStorm that’s causing that, but I’m still going to finish testing the rest of the commands.
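
The command tests themselves are all variations on the same idea: call the command through the console kernel, then check the exit code and the side effects. The command name and arguments below are placeholders rather than the project’s real signature, but the shape is this:

public function testCreateEnvCommand()
{
   // 'env:create' and its arguments stand in for whatever CreateEnv actually defines.
   $exitCode = $this->artisan('env:create', ['name' => 'golf_command_env']);

   $this->assertEquals(0, $exitCode);
   $this->seeInDatabase('environments', ['name' => 'golf_command_env']);
}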

Finishing Up

I was able to get all the code paths covered in my commands, and so I consider this experiment complete. All in all, I got 100% code coverage for every piece of code I wrote for this project in 27 assertions and 277 lines of test code.

I’ve checked everything into the branch named “TestingGolf”, if you want to check out the project.

So What Did We Learn?

As developers, we are pretty aware that code coverage isn’t a very good metric. I mean, the tests I have written here barely check any output: it’s just the bare minimum. If you’re working for a company that uses code coverage as a completion metric, maybe forward this article up the chain? Couldn’t hurt.

But honestly, after spending the past couple of hours doing this, I’ve learned that code coverage is worth something. While my tests are admittedly not very good, in service of the metric I defined, I did actually find a couple of bugs doing this. I found some unreachable code that I removed, and a bad command definition that made my test not run correctly. Those are bugs I probably wouldn’t have found without some sort of testing, and trying to be even this definition of thorough genuinely helped the project.

But most of all, I think we learn that as developers, project managers, and even executives and salespeople, you get what you measure. I set out to get 100% code coverage; that’s what I measured myself by, and that’s exactly what I got: 100% code coverage. The important lesson is to measure not just what’s easy, but what actually matters to you.
