
Continuous Integration with the MEAN Stack

Travis-CI has great support for continuous integration with the MEAN stack. It took some research to arrive at a complete solution, which then led to some revamping of my initial config. I want to share my notes with my agile hacker friends so you can take advantage of this in your projects, both large and small. This article is a continuation of my previous articles on setting up a proper test environment for the MEAN stack. Some things in this article are contextual and may seem out of place unless you’ve read the previous articles:

  1. TDD and BDD With The MEAN Stack: Introduction
  2. BDD with MEAN – The Server Part 1

Using continuous integration on large-scale projects is, in my opinion, a requirement. Some might debate its value for small or personal projects; I’d like to put that to rest as well.

For the non-believers

Some may think that CI for small-scale or personal projects is overkill. However, I believe CI for small/personal projects provides the following benefits:

  1. Verification that your application can run in an isolated environment, away from your personal machine, on standard infrastructure.
  2. Marketing, and a certain level of assurance to potential consumers that you treat your tests as a first-class concern.
    • Users can see first-hand that your tests run cleanly, instead of being told to run npm test themselves after downloading your code.
    • It shows attention to the engineering process as well as an additional level of transparency.
  3. It is an enabler for Continuous Deployment. In fact, Travis-CI supports continuous deployment to several different environments out of the box.

I believe strongly in maintaining the application in an always shippable state. Having a proper CI strategy is one step on that path.

Key Elements

  • Externalized application configuration
  • Travis-CI config for public github projects

Application Configuration

I chose to use the node-config module with the JavaScript configuration format. There are two key strengths to this module:

  1. It supports using JavaScript for the configuration format, which makes it easy to build reusable chunks of configuration by exposing simple functions that return parts of the configuration object literal.
  2. It maps the NODE_ENV environment variable to configuration files stored in the config directory in the root of your project by convention. This allows you to create specialized configurations for development, CI, production, etc.

I chose travisci as the name of my CI environment, which, as we’ll see later, is set as the exported value of the NODE_ENV environment variable in the .travis.yml config file.

The reusable parent configuration
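
As a rough sketch (the file name, helper names, and values here are hypothetical), a reusable parent configuration using node-config’s JavaScript format might look something like this:

// config/parent.js -- hypothetical name; a reusable chunk shared by each environment file
var path = require('path');

module.exports = {
    // returns the part of the configuration object literal that deals with mongo
    buildMongoConfig: function (host, dbName) {
        return {
            host: host,
            port: 27017,
            db: dbName
        };
    },

    // returns the path prefixes used to keep require calls free of long relative paths
    buildPathConfig: function (baseDir) {
        return {
            base: baseDir,
            controllers: path.join(baseDir, 'src/server/app/controllers')
        };
    }
};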

travisci Environment Configuration
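
A matching sketch of config/travisci.js, which node-config picks up when NODE_ENV is set to travisci, might look like this (the database name is a placeholder):

// config/travisci.js -- loaded by node-config when NODE_ENV=travisci
var parent = require('./parent');   // the hypothetical reusable parent configuration above

module.exports = {
    // BUILD_DIR is exported from .travis.yml (see the Secret Sauce section below)
    paths: parent.buildPathConfig(process.env.BUILD_DIR),
    mongo: parent.buildMongoConfig('localhost', 'thoughtsom-ci')
};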

Travis-CI Configuration

The key ingredients to tying this configuration back to our node-config configuration file are the NODE_ENV and BUILD_DIR environment variables. With Travis-CI, you can export environment variables by declaring them under the following YAML block:

env:
  global:

Since I’ve chosen to use Grunt to automate task execution for the project, and Bower to manage front-end dependencies, these global modules must be installed during the before_script: hook. The other blocks (language, node_js, and services) tell Travis-CI what sort of environment is needed. What’s really cool about this is that we can run our BDD tests in this environment as well, since we have a MongoDB instance available. This is one of my favorite features of Travis-CI.
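
Putting those pieces together, a minimal .travis.yml along these lines should work; the Node version and the grunt invocation in the script block are assumptions, while the env, services, and before_script entries follow from the description above:

language: node_js
node_js:
  - "0.10"               # assumed Node version, for illustration only
services:
  - mongodb              # gives the build a local mongodb instance for the BDD tests
env:
  global:
    - NODE_ENV=travisci
    - BUILD_DIR=$(pwd)   # see the Secret Sauce section below
before_script:
  - npm install -g grunt-cli bower
script:
  - grunt                # assumed: a grunt target that lints and runs the tests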

Secret Sauce

I did some digging and discovered that Travis-CI clones your repo into $HOME/<username>, so the effective working directory will be $HOME/<username>/<cloned repo name>. However, their environment variable documentation recommends not relying on the value of the HOME environment variable. To work around this, I capture the working directory in the .travis.yml file by using the pwd sub-shell command as part of the value for BUILD_DIR. In the travisci.js configuration file, I then use BUILD_DIR as the initial working directory, which avoids having a bunch of relative path references in the application’s require calls.
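
As a rough illustration of the payoff (the paths.base key comes from the hypothetical config sketch above, and the controller path is made up), application and test code can then build absolute require paths from the configured base directory:

var path = require('path');
var config = require('config');

// resolves to BUILD_DIR on Travis-CI and to the local project root elsewhere
var basePath = config.get('paths.base');

// no chain of ../../.. segments in the require call
var thoughtController = require(path.join(basePath, 'src/server/app/controllers/thought-controller'));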

Parting Thoughts

I’m quite pleased with this setup for the Travis-CI environment. I went ahead and included their build badges in my thoughtsom github repo. I intended to also include code coverage integration with Coveralls; however, I ran into some issues with the blanket code coverage tool. I plan to give istanbul a spin, but I have not had time to properly set that up yet. The long-term goal is to get continuous deployment to OpenShift working.

BDD with MEAN – The Server Part 1

As with any new endeavor, it pays to spend some time trying various solutions out and sometimes failing miserably. This is especially true for us progressive nerds who like to live on the bleeding edge without things like Stack Overflow to constantly save our ass. What I’d like to do is to help you avoid going through the pain of figuring out what works and what doesn’t.

As I mentioned in my previous post, I already have a project that serves as a working example if you wish to jump straight into the code: https://github.com/zpratt/thoughtsom. All of the gists used in this post were pulled from that project.

The first step on our journey to effective BDD testing with the MEAN stack will be to start wiring up the various tools we’ll need to use to get a working environment. Afterwards, we’ll build out a helper to manage our test environment and fixtures.

Let’s start by reviewing our toolbox:

Our Tool Box

  • Grunt
    • Grunt is used for general task running. It can be used in a similar manner to how you might use rake when working on a Ruby-based project. It is particularly useful on the server side when you combine it with watch and nodemon, which are more runtime-oriented.
  • Yadda
    • As mentioned earlier, Yadda will be serving as our BDD layer/framework. Yadda itself needs to integrate with another test framework like Mocha or Jasmine to provide a complete stack. It includes a Gherkin parser, multilingual support, and parameterized step definitions.
  • Mocha
    • Mocha is a general-purpose JavaScript test framework. Mocha will be used more explicitly on the TDD side, but Yadda’s integration with it allows you to make use of the Before, BeforeEach, After, and AfterEach hooks to set up and tear down your test fixtures.
  • Sinon
    • A standalone test library with great support for stubbing, spying, mocking, and additional assertions that complement your regular xUnit-style assertions when TDD’ing. Sinon also provides a fakeServer API that can be useful for prototyping endpoints or as a test double for external dependencies in BDD tests.
  • Chai
    • Provides BDD-style assertions (expect and should) as well as xUnit-style assertions.
  • grunt-mocha-cov
    • A grunt plugin for running mocha tests. Since we’ve chosen to use mocha as the underlying test framework for Yadda, this will help automate the execution of our BDD tests.
  • Supertest
    • Used to test our express routes. Supertest has the ability to consume an express app and its associated routes. One of the things I especially like about Supertest is that it does not require you to have previously spun up an instance of your application in order to execute the tests. (A brief illustration follows the toolbox list below.)
  • Proxyquire
    • Allows us to inject stubs or fake objects into the require namespace.
  • Casual
    • A robust library for generating test data.
  • node-config
    • Require paths and database connection information quickly get messy. I like to extract some of that mess into config files.
  • Yaml
    • I prefer YAML config files because they are short and sweet. This module works nicely with the node-config module.
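
To make the Supertest point from the toolbox concrete, here is a small illustrative snippet; the app path is hypothetical, and the route and knownObjectId key are the ones discussed later in this post. Supertest accepts the express app directly, so nothing needs to be listening on a port beforehand:

var request = require('supertest');
var config = require('config');
var app = require('../../src/server/app');   // hypothetical path to the express app

describe('GET /thought/:id', function () {
    it('returns the known thought', function (done) {
        request(app)
            .get('/thought/' + config.get('knownObjectId'))
            .expect('Content-Type', /json/)
            .expect(200, done);
    });
});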

Folder structure

The next most logical step is to establish our folder structure. I have a specific folder structure that I like to use, and I am not currently aware of a Yeoman generator that will scaffold anything based on my requirements, so for now I recommend creating the following structure:

|-- config
|-- Gruntfile.js
|-- LICENSE
|-- package.json
|-- README.md
|-- src
|   `-- server
|       |-- app
|       |   `-- controllers
|       |-- models
|       |-- repositories
|       `-- schemas
`-- test
    |-- acceptance
    |   `-- server
    |       `-- features
    |           |-- step_definitions
    |           `-- support
    |-- helpers
    |   `-- stubs
    `-- unit
        `-- server
            |-- app
            |   `-- controllers
            `-- repository

The configuration file – config/default.yml

The configuration file is pretty straightforward, for the most part. I like to establish some path prefixes to avoid the third level of require hell: crazy relative path references. The one cryptic part might be the knownObjectId key, which I will explain later in the section on world.js.
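
As a rough sketch (the values and most of the keys here are placeholders), config/default.yml might look something like this:

# config/default.yml -- illustrative values only
paths:
  base: .
  controllers: src/server/app/controllers
  repositories: src/server/repositories

mongo:
  host: localhost
  db: thoughtsom-dev

# a fixed ObjectId that the test fixtures insert; explained in the world.js section
knownObjectId: 53a1f6cc8a2b4f0000000001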

The Gruntfile

The most important part of this is the mochacov configuration block. I am well aware of the simple mocha plugin; however, I have found that mochacov appears to be more actively maintained, and it also gives much better output when things go wrong (which happened to me quite often while I was experimenting with how to get this all configured). Aside from my usage of mochacov, I have chosen to run jshint against both the production code and the tests. I feel that this is a good practice: it helps keep the code styles of production code and test code consistent, and it acts as a safety net against tricky syntactic boogers... I mean, issues that are not always obvious.
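
The following is a trimmed-down sketch of the relevant Gruntfile pieces; the globs, the reporter, and the use of grunt-contrib-jshint for the jshint task are assumptions:

// Gruntfile.js -- trimmed-down sketch
module.exports = function (grunt) {
    grunt.initConfig({
        jshint: {
            options: { jshintrc: '.jshintrc' },
            // lint production code and tests alike
            all: ['src/**/*.js', 'test/**/*.js']
        },
        mochacov: {
            test: {
                options: { reporter: 'spec' }
            },
            options: {
                // point this at wherever your Yadda/Mocha entry files live
                files: ['test/acceptance/server/**/*.js']
            }
        }
    });

    grunt.loadNpmTasks('grunt-contrib-jshint');
    grunt.loadNpmTasks('grunt-mocha-cov');

    grunt.registerTask('test', ['jshint', 'mochacov:test']);
};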

The World

One of the useful things I learned from my cucumber-js experiment is the value of having a helper that can aid in setting up and tearing down the test environment and related fixtures. I have chosen to borrow this concept for usage with Yadda/Mocha as well. The world.js file should live at the following path: test/acceptance/server/features/support/world.js

There are three main highlights in the world.js helper (a rough sketch of the file follows this list):

  1. It handles connecting to and disconnecting from our local mongodb instance with connectToDB(done) and disconnectDB(done).
    • Note that both of these take done as a parameter, which is the callback that mocha will use to determine when an asynchronous function is truly finished.
  2. It handles inserting test data for us, including at least one known ObjectId, via createThought(done).
    • This method currently inserts only one document into the datastore; the important part is that it inserts a record with a known primary key, which we can then use in our route test to make sure we actually get a result back when we hit the /thought/:id route. The value is pulled from the knownObjectId key in our default.yml configuration file.
  3. It clears the database (with clearDB(done)), so each of our test runs starts from a clean, repeatable state.
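
A rough sketch of such a world.js helper, using Mongoose and Casual, might look like this; the schema, model, and mongo config keys are assumptions and should be adapted to your own project:

// test/acceptance/server/features/support/world.js -- approximate sketch
var mongoose = require('mongoose');
var casual = require('casual');
var config = require('config');

// hypothetical schema/model; the real project keeps these under src/server/schemas
var thoughtSchema = new mongoose.Schema({ content: String });
var Thought = mongoose.model('Thought', thoughtSchema);

module.exports = {
    connectToDB: function (done) {
        mongoose.connect('mongodb://' + config.get('mongo.host') + '/' + config.get('mongo.db'), done);
    },

    disconnectDB: function (done) {
        mongoose.disconnect(done);
    },

    // insert one record with a known primary key so the /thought/:id route has something to return
    createThought: function (done) {
        Thought.create({
            _id: config.get('knownObjectId'),
            content: casual.sentence
        }, done);
    },

    // wipe the collection so every run starts from a clean slate
    clearDB: function (done) {
        Thought.remove({}, done);
    }
};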

Summary

As I have started to dig into this, I have realized that I could essentially write a book on this topic. If you’re anxious to take off with this and looking for more examples, I encourage you to take a look at my thoughtsom repo on github, which I continue to build out and try to commit chunks to each week. Getting the test environment and project structure right at the beginning will help us stay organized and have a solid configuration base to build on top of.

I plan on breaking this up into separate posts in the hope of making this a complete but consumable reference.

Here’s a rough sketch of what to expect:

  1. Part 1 (this post): cover the basic libraries we’ll use, and establish the prerequisites for automating our test runs.
  2. Part 2: Write our first feature and failing step definition using Supertest to verify an express route.
  3. Part 3: A little application architecture and more advanced usages, such as multiple step definitions and features.
  4. Part 4: Unit test overview: stubs and mocks with sinon

After these four parts, I plan to begin covering the UI side of things and demonstrate how most of the tools will translate to being used on both the server and the browser sides of testing. I welcome any feedback on the quality of this article and the direction you’d like to see this take. Feel free to flame/praise me on twitter.

TDD and BDD With The MEAN Stack: Introduction

As the MEAN stack is growing in adoption, a variety of testing strategies have sprouted up on the interwebs. As I have started to dig deeper into automated testing strategies for both sides of the MEAN stack, I have found it difficult to find advice or material on how to set up an environment to support the mockist style of test-driving the implementation. Many articles offer some light recommendations, but are typically more classicist-oriented. I’d like to help fill the gap by outlining a tooling strategy which I believe enables the following:

  • Automation
  • Mockist unit testing
  • BDD/ATDD testing
  • End-To-End system testing

In case you want to dive straight into the code, I’ve set up a project on github to prove out the technology.

I’m going to assume you’ve already got a working MongoDB and Node.js environment. If not, take a look at a vagrant solution or mean.io.

One of my goals has been to try to find tools that can work on both the server and the browser. I also put a lot of stock into automation, so I have aimed to avoid tools that require manual loading of browser pages to launch the tests. This has proven to be quite a challenge, but the xUnit-side of testing is much closer to that goal than the BDD-side. However, all hope is not lost.

Let’s get down to business and outline the tools that are working well on both sides:

  • Grunt – The task runner
  • Mocha – The test framework
  • Chai – The assertion library
  • Sinon – The stub, spy, and mock Swiss Army knife

Some tools are sort of in the middle, where they can be made to work in both the browser and the server, but they aren’t rock solid in one or the other:

  • Yadda – BDD layer that works nicely with Mocha+Chai (needs a little help on the browser side, but I’m working on that)
  • Cucumber-js – (BDD) works great on the server side, but falls short in the browser (particularly because of a lack of support for Karma integration)

And finally, some tools are targeted at one side or the other:

  • Karma – THE test runner for the browser
  • karma-browserifast – A browserify plugin for Karma, which is needed to automate running Yadda BDD tests in the browser
  • Supertest – Awesome library for testing Express-based RESTful endpoints
  • CasperJS – Drive user interactions through the UI (and works with Yadda)

There is more to come. I plan to write two more articles to dig further into the details of what it means to test drive on the server side and in the browser. For now, take a look at my Yadda/Karma example on Github for BDD in the browser and thoughtsom for BDD and unit testing on the server. I plan to build out a few more code examples on the browser side of testing before the next set of articles. In the meantime, I welcome your feedback.

P.S. I am aware of Protractor, but I have not had much of a chance to experiment with it yet.