
BDD with MEAN – The Server Part 1

As with any new endeavor, it pays to spend some time trying various solutions out and sometimes failing miserably. This is especially true for us progressive nerds who like to live on the bleeding edge without things like Stack Overflow to constantly save our ass. What I’d like to do is to help you avoid going through the pain of figuring out what works and what doesn’t.

As I mentioned in my previous post, I already have a project that serves as a working example, if you wish to jump straight into the code: https://github.com/zpratt/thoughtsom. All of the gists used in this post were pulled from that project.

The first step on our journey to effective BDD testing with the MEAN stack will be to start wiring up the various tools we’ll need to use to get a working environment. Afterwards, we’ll build out a helper to manage our test environment and fixtures.

Let’s start by reviewing our toolbox:

Our Tool Box

  • Grunt
    • Grunt is used for general task running. It can be used in a similar manner to how you might use rake when working on a ruby-based project. It is particularly useful on the server side when you combine it with watch and nodemon, which are more runtime oriented.
  • Yadda
    • As mentioned earlier, Yadda will be serving as our BDD layer/framework. Yadda itself needs to integrate with another test framework like Mocha or Jasmine to provide a complete stack. It includes a Gherkin parser, multilingual support, and parameterized step definitions.
  • Mocha
    • Mocha is a general purpose javascript test framework. Mocha will be used more explicitly on the TDD side, but Yadda’s integration with it allows you to make use of the Before, BeforeEach, After, and AfterEach hooks to set up and tear down your test fixtures.
  • Sinon
    • Sinon is a standalone test library with great support for stubbing, spying, and mocking, as well as additional assertions that complement your regular xUnit-style assertions when TDD’ing. Sinon also provides a fakeServer API that can be useful for prototyping endpoints or as a test double for external dependencies in BDD tests.
  • Chai
    • Provides both BDD-style assertions (expect and should) as well as xUnit-style assertions.
  • grunt-mocha-cov
    • A grunt plugin for running mocha tests. Since we’ve chosen to use mocha as the underlying test framework for Yadda, this will help automate the execution of our BDD tests.
  • Supertest
    • Used to test our express routes. Supertest has the ability to consume an express app and its associated routes. One of the things I especially like about Supertest is that it does not require you to have previously spun up an instance of your application in order to execute the tests.
  • Proxyquire
    • Allows us to inject stubs or fake objects into the require namespace.
  • Casual
    • A robust library for generating test data.
  • node-config
    • Require paths and database connection information quickly get messy. I like to extract some of that mess out into config files.
  • Yaml
    • I prefer yaml config files because they are short and sweet. This module works nicely with the node-config module.

Folder structure

The next most logical step is to establish our folder structure. I have a specific folder structure that I like to use and I am not currently aware of a yeoman generator that will scaffold anything based on my requirements, so for now I recommend creating the following structure:

|-- config
|-- Gruntfile.js
|-- LICENSE
|-- package.json
|-- README.md
|-- src
|   `-- server
|       |-- app
|       |   `-- controllers
|       |-- models
|       |-- repositories
|       `-- schemas
`-- test
    |-- acceptance
    |   `-- server
    |       `-- features
    |           |-- step_definitions
    |           `-- support
    |-- helpers
    |   `-- stubs
    `-- unit
        `-- server
            |-- app
            |   `-- controllers
            `-- repository
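If you'd rather not create each of those directories by hand, the same tree can be scaffolded with a few mkdir -p commands (a sketch based on the layout above):

```shell
# Production source tree
mkdir -p src/server/app/controllers
mkdir -p src/server/models src/server/repositories src/server/schemas

# Test tree: acceptance features, helpers, and unit tests
mkdir -p test/acceptance/server/features/step_definitions
mkdir -p test/acceptance/server/features/support
mkdir -p test/helpers/stubs
mkdir -p test/unit/server/app/controllers
mkdir -p test/unit/server/repository

# Config directory for node-config
mkdir -p config
```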

The configuration file – config/default.yml

The configuration file is pretty straightforward, for the most part. I like to establish some path prefixes to avoid the third level of require hell: crazy relative path references. The one cryptic part might be the knownObjectId key, which I will explain later in the section on world.js.
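The original gist is not reproduced here, but a minimal default.yml along these lines captures the idea. Apart from knownObjectId, the key names and values below are illustrative assumptions, not copied from the project:

```yaml
# config/default.yml -- a sketch
# basePath/testPath are assumed names for the require-path prefixes
basePath: ./src/server
testPath: ./test

# assumed connection settings for the local mongodb instance
db:
  host: localhost
  name: thoughtsom-test

# a fixed primary key used by world.js to seed a known record
# (placeholder value; use any valid 24-character hex ObjectId)
knownObjectId: 0123456789abcdef01234567
```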

The Gruntfile

The most important part of this is the mochacov configuration block. I am well aware of the simple mocha plugin; however, I have found that mochacov appears to be more actively maintained, and it gives much better output when things go wrong (which happened to me quite often while I was experimenting with how to get this all configured). Aside from my usage of mochacov, I have chosen to run jshint against both the production code and the tests. I feel this is a good practice: it ensures consistency between the code styles of production code and test code, and it serves as a safety net against tricky syntactic boogers… I mean, issues that are not always obvious.

The World

One of the useful things I learned from my cucumber-js experiment is the value of having a helper that can aid in setting up and tearing down the test environment and related fixtures. I have chosen to borrow this concept for usage with Yadda/Mocha as well. The world.js file should live at test/acceptance/server/features/support/world.js

There are 3 main highlights in the world.js helper:

  1. It handles connecting and disconnecting from our local mongodb instance with connectToDB(done) and disconnectDB(done).
    • Note that both of these take done as a parameter, which is the callback that mocha will use to determine when an asynchronous function is truly finished.
  2. It handles inserting test data for us, with at least one known ObjectId, via createThought(done).
    • This method currently only inserts one row of data into the datastore, however, the important part is that it inserts a record with a known primary key that we can then use in our route test to make sure that we can actually return a result when we hit the /thought/:id route. The value is pulled from our knownObjectId key in our default.yml configuration file.
  3. It clears the database (with clearDB(done)), which makes each of our test runs idempotent.

Summary

As I have started to dig into this, I have realized that I could essentially write a book on this topic. If you’re anxious to take off with this and looking for more examples, I encourage you to take a look at my thoughtsom repo on github, which I continue to build out and try to commit chunks to each week. Getting the test environment and project structure right at the beginning will help us stay organized and have a solid configuration base to build on top of.

I plan on breaking this up into separate posts in the hope of making this a complete but consumable reference.

Here’s a rough sketch of what to expect:

  1. Part 1 (this post): cover the basic libraries we’ll use, and establish the prerequisites for automating our test runs.
  2. Part 2: Write our first feature and failing step definition using Supertest to verify an express route.
  3. Part 3: A little application architecture and more advanced usages, such as multiple step definitions and features.
  4. Part 4: Unit test overview: stubs and mocks with sinon

After these 4 parts, I plan to begin covering the UI side of things and demonstrate how most of the tools will translate to being used on both the server and the browser sides of testing. I welcome any feedback on the quality of this article and the direction you’d like to see this take. Feel free to flame/praise me on twitter.

TDD and BDD With The MEAN Stack: Introduction

As the MEAN stack is growing in adoption, a variety of testing strategies have sprouted up on the interwebs. As I have started to dig deeper into automated testing strategies for both sides of the MEAN stack, I have found it difficult to find advice or material on how to setup an environment to support the mockist style of test-driving the implementation. Many articles offer some light recommendations, but are typically more classicist-oriented. I’d like to help fill the gap by outlining a tooling strategy which I believe enables the following:

  • Automation
  • Mockist unit testing
  • BDD/ATDD testing
  • End-To-End system testing

In case you want to dive straight into the code, I’ve set up a project on github to prove out the technology.

I’m going to assume you’ve already got a working MongoDB and Node.js environment. If not, take a look at a vagrant solution or mean.io.

One of my goals has been to try to find tools that can work on both the server and the browser. I also put a lot of stock into automation, so I have aimed to avoid tools that require manual loading of browser pages to launch the tests. This has proven to be quite a challenge, but the xUnit-side of testing is much closer to that goal than the BDD-side. However, all hope is not lost.

Let’s get down to business and outline the tools that are working well on both sides:

  • Grunt – The task runner
  • Mocha – The test framework
  • Chai – The assertion library
  • Sinon – The stub, spy, and mock Swiss Army knife

Some tools are sort of in the middle, where they can be made to work in both the browser and the server, but they aren’t rock solid in one or the other:

  • Yadda – BDD layer that works nicely with Mocha+Chai (needs a little help on the browser side, but I’m working on that)
  • Cucumber-js – (BDD) works great on the server side, but fails on the browser side (particularly because of a lack of support for karma integration)

And finally, some tools are targeted at one side or the other:

  • Karma – THE test runner for the browser
  • karma-browserifast – A browserify plugin for Karma, which is needed to automate running Yadda BDD tests in the browser
  • Supertest – Awesome library for testing Express-based RESTful endpoints
  • CasperJS – Drive user interactions through the UI (and works with Yadda)

There is more to come. I plan to write two more articles to dig further into the details of what it means to test drive on the server side and in the browser. For now, take a look at my Yadda/Karma example on Github for BDD in the browser and thoughtsom for BDD and unit testing on the server. I plan to build out a few more code examples on the browser side of testing before the next set of articles. In the meantime, I welcome your feedback.

P.S. I am aware of Protractor, but I have not had much of a chance to experiment with it yet.

Setting up a project using karma with mocha and chai

Recently, I was on a quest to evaluate the mocha javascript testing framework and its direct support for async testing in the browser. I had a hard time finding a complete description of everything that needed to be done in order to get things working with karma for testing in the browser, as opposed to testing a node module.

Consequently, I decided it would be useful to create a guide for those who have similar interests to save everyone time. If you want to skip the steps and just download the code, feel free to clone my github repository. As a quick aside, I decided to try mocha thanks to some frustrations with jasmine. Async testing with jasmine can be done, but it does not have first-class support in my opinion. I intend to give buster a spin in the near future as well.

This tutorial assumes that you are on a unix-based system (I used OSX to write the article) and already have a working nodejs and npm installation, therefore installing and configuring a node.js/npm environment is outside the scope of this article. With that said, let’s dig in!

  1. Create a directory to hold the project and navigate to that directory:
    • mkdir <new_dir>
    • cd <new_dir>
  2. We’ll be installing a bunch of npm modules, so let’s run npm init to build our package.json file for us
  3. Let’s install karma with npm. I recommend installing it locally, since it’s likely that you’ll be executing karma either from an IDE (such as webstorm) or as a grunt task. Additionally, one project may be configured for one version of karma while a different project can use a different version if needed.
    • npm install karma --save-dev
    • At the time I wrote this article, the latest stable version of karma was 0.10.8.
  4. Now we’re ready to install mocha:
    • npm install mocha --save-dev
  5. We’ll also need the karma-mocha adapter so we can use mocha as a test framework with karma:
    • npm install karma-mocha --save-dev
  6. An assertion library will need to be included as well. I prefer the chai assertion library, since it offers both bdd-style assertions using expect and should as well as junit-style assertions.
    • npm install karma-chai --save-dev
    • Installing chai this way will make it available as a framework within karma, which makes setup much simpler.
  7. I also find sinon to be extremely useful for stubbing and spying, so let’s install it to round out our set of packages required to get our test environment provisioned:
    • npm install karma-sinon --save-dev

At this point, our package.json should have a devDependencies directive that looks something like this:

devDependencies:
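The original gist is gone, but the packages we just installed imply something like the following. The karma version comes from the post; the "*" entries are placeholders for whatever ranges npm pinned at install time:

```json
{
  "devDependencies": {
    "karma": "~0.10.8",
    "mocha": "*",
    "karma-mocha": "*",
    "karma-chai": "*",
    "karma-sinon": "*"
  }
}
```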

We can now create our karma configuration and write some tests. Let’s first make sure that our karma installation is working correctly:

  • ./node_modules/karma/bin/karma --version

That should print out the version number to the console if everything is working correctly. If you want to, you can initialize a karma configuration file using:

  • ./node_modules/karma/bin/karma init

I’ll provide the boilerplate so you can avoid that step. Include the following configuration in a file named karma.conf.js:

Once that file is ready, we can create a failing test to ensure our configuration is working as expected. We’ll put a new file called test.spec.js in src/test. First, create the directory:

  • mkdir -p src/test

And a test file with the following contents:

describe('A test suite', function () {
    beforeEach(function () { });
    afterEach(function () { });

    it('should fail', function () {
        expect(true).to.be.false;
    });
});

With our test spec in place, we can run karma and make sure that we can successfully execute failing tests:

  • ./node_modules/karma/bin/karma start karma.conf.js

After running our tests, we should see some output along the lines of

A test suite should fail FAILED

If so, then you’ve successfully configured karma with mocha, chai and sinon. Cheers!

Update: 07/14/14

I recently updated this to use the new 0.12.x version of karma and also added an example Gruntfile.js