React Components for Google Maps – Part 2

Making React and Google Maps play nice with each other is not a complex task, provided you have some insight into how to open seams in the google maps layer so it can interoperate with other libraries. I have learned this from the school of hard knocks, so I hope to shed some light on the tricky parts and save you some time. If you haven’t already, I recommend you read part 1 of this series for some important background information.

Let’s dig into the implementation. I have broken out the examples into a set of gists. If you don’t care to follow through each step, you can jump straight to the full example of the OverlayView.

Implementing the OverlayView

There are three high-level requirements for creating a custom OverlayView. The module must:

  1. Prototypically inherit from the google.maps.OverlayView class
  2. Implement an onAdd method, which will be called once the map has been set for the overlay
  3. Implement a draw method, which will be called each time the map is zoomed

Secret sauce

The secret sauce to getting the overlay and React to play nice is to take the DOM element representing the content of the overlay as a parameter through the constructor. As we will see later on, this will allow us to handle rendering the content of the overlay in the React component, but position it on the map through the overlay.

Extend the google.maps.OverlayView class
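A minimal sketch of the inheritance, with the content element taken through the constructor as described above (the ContentOverlay name and constructor signature are mine; the gists linked above contain the complete example):

/* global google */

// A hypothetical overlay; `content` is the detached DOM element that the
// React component will render into (the secret sauce described above).
function ContentOverlay(content, latLng) {
    this.content = content;
    this.latLng = latLng;
}

// Prototypically inherit from the google.maps.OverlayView class.
ContentOverlay.prototype = Object.create(google.maps.OverlayView.prototype);
ContentOverlay.prototype.constructor = ContentOverlay;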

Adding the overlay to the DOM

The onAdd method is the point at which the overlay can insert itself into the DOM. This is achieved by using the getPanes method, which is inherited from the parent OverlayView class. There are multiple panes, but for our particular case, we’ll use the overlayLayer pane. This will allow us to control any click-like behavior completely within the DOM structure of the overlay.

It’s important to understand the purpose of this method when integrating an overlay with a particular framework or library, because inserting a view into the DOM is typically handled by the framework. This method is our seam: it allows us to inject content into the map without coupling that content to the map itself.
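A minimal onAdd, continuing the hypothetical ContentOverlay from above:

ContentOverlay.prototype.onAdd = function () {
    // getPanes is inherited from OverlayView and is only safe to call
    // once the map has been set on the overlay.
    this.getPanes().overlayLayer.appendChild(this.content);
};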

Positioning the overlay on the map

The draw method is invoked when the overlay needs to calculate its position on the map. In the simplest case, an overlay is positioned by a single lat/lng point; it is also possible to position an overlay by a bounds. We’ll focus on the simplest case in this article. In order to translate a lat/lng to a pixel, we need a reference to the overlay’s projection. Projections are google maps’ way of turning lat/lng points into pixels relative to something within the map structure.

To safely get a reference to a projection, we have to wait for the map instance to be idle. I have found it is best to handle this dependency outside the overlay, particularly if you plan on rendering hundreds or thousands of overlays on the map. I will discuss how to handle this externally as part of implementing the facade. Regardless of where it is handled, the way to know when the map is idle is to add a listener to the map instance for the idle event.
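A sketch of draw for the point-positioned case (the offsets assume the content should be centered above its point, which is a styling choice of mine, not a requirement):

ContentOverlay.prototype.draw = function () {
    // getProjection is inherited from OverlayView; fromLatLngToDivPixel
    // translates the lat/lng into pixel coordinates relative to the panes.
    var position = this.getProjection().fromLatLngToDivPixel(this.latLng);

    this.content.style.position = 'absolute';
    this.content.style.left = (position.x - this.content.offsetWidth / 2) + 'px';
    this.content.style.top = (position.y - this.content.offsetHeight) + 'px';
};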

Performance

In my experience, when rendering a high volume of overlays on the map (300+), the draw method has the highest likelihood to impact rendering performance. There are two reasons for this. First, it is relatively expensive to determine the dimensions of an element in the DOM, and we need the dimensions of the overlay in order to position it properly on the map. Second, the draw method is invoked each time we zoom, so we need the dimensions of our overlay again each time in order to reposition it correctly.

When rendering high volumes of overlays, we can dramatically improve performance by caching the dimensions. To take it a step further, if your overlays all have a consistent height and width, you can calculate the dimensions once and use the cached copy for all other instances. Exactly how to achieve that is outside the scope of this article, but I wanted to highlight it, because it has caused me a great deal of heartburn in the past.
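As an illustration of the per-instance version of that idea (not a drop-in solution), a cached dimension lookup might look like:

ContentOverlay.prototype.getDimensions = function () {
    // Reading offsetWidth/offsetHeight forces a layout calculation, so do
    // it once and reuse the result on subsequent draw calls.
    if (!this.dimensions) {
        this.dimensions = {
            width: this.content.offsetWidth,
            height: this.content.offsetHeight
        };
    }

    return this.dimensions;
};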

Implementing the facade

Our general requirements for the facade are:

  1. Handle the creation of the google map instance (a factory)
  2. Provide a facility for consumers to know when the map instance is idle without having to add listeners themselves
  3. Provide a factory for creating our custom overlays

Map facade

The best way I have found to handle both items 1 and 2 is to define a method that handles creating the map instance, but returns a Promise. The Promise is resolved with the newly created map instance once the map is idle. This allows consumers to bind to the Promise and perform any actions that depend on the map being idle. As consumers, we don’t care whether we bind to the map early or late in its lifecycle; once it’s idle, we know our callback will be invoked and we’ll have access to an instance of the map.
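A minimal sketch of the pattern, assuming a Promise implementation is available (the names are mine, not necessarily the async-google-maps API):

function createMap(element, options) {
    return new Promise(function (resolve) {
        var map = new google.maps.Map(element, options);

        // Resolve with the map instance the first time it goes idle.
        google.maps.event.addListenerOnce(map, 'idle', function () {
            resolve(map);
        });
    });
}

Consumers can then simply write createMap(node, options).then(function (map) { ... }) without caring where in the map’s lifecycle they bound.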

Overlay factory

To avoid coupling ourselves to our custom overlay, it is best to implement a factory to handle creating instances of the overlay for us. This allows us to handle the setup of the overlay in a single place, and will improve the testability of modules that need to create an overlay instance.
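A minimal factory, again using the hypothetical ContentOverlay:

function createOverlay(content, latLng, map) {
    var overlay = new ContentOverlay(content, latLng);

    // Setting the map is what ultimately triggers the overlay's onAdd hook.
    overlay.setMap(map);

    return overlay;
}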

Implementing the React component

By abstracting away the details of defining the overlay and its interactions with the map in our facade, there is very little overlay-specific logic that needs to be included in the component. All we need to do is call our factory. The key is to do this in the componentDidMount method of the component, since this is the point at which we have access to the DOM node of the component. You are free to handle the rendering of the content for the overlay in the render method of the component.

Secret sauce

The key to achieving this is to render the overlay component with a detached div element and not an element that is already in the DOM. This allows us to render our content in the detached element and to pass it to the overlay. The component can then safely share its DOM node with the overlay. Here’s an example of how to achieve this with my idle-maps module.
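Here is a sketch of that wiring, reusing the hypothetical createMap and createOverlay helpers from above (this illustrates the pattern rather than the exact idle-maps API; component, prop names, and coordinates are mine):

var React = require('react'),
    ReactDOM = require('react-dom');

var MapMarker = React.createClass({
    componentDidMount: function () {
        // At this point the component has a DOM node (inside the detached
        // div), so it is safe to hand it to the overlay factory.
        this.overlay = createOverlay(
            ReactDOM.findDOMNode(this),
            this.props.latLng,
            this.props.map
        );
    },

    componentWillUnmount: function () {
        // Detach the overlay from the map when the component goes away.
        this.overlay.setMap(null);
    },

    render: function () {
        return React.createElement('div', { className: 'map-marker' }, this.props.children);
    }
});

var position = new google.maps.LatLng(44.98, -93.27);

createMap(document.getElementById('map'), { zoom: 10, center: position })
    .then(function (map) {
        // Render into a detached div, NOT an element already in the DOM;
        // React renders the content, the overlay positions it on the map.
        ReactDOM.render(
            React.createElement(MapMarker, { map: map, latLng: position }),
            document.createElement('div')
        );
    });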

Parting Thoughts

Where I work, we have a very map-centric application that, at times, renders 2,000+ overlays, and we leverage a variety of overlay types. Following the patterns I have laid out has proven to be very effective for us. While building out this capability in our application, I would have found it quite valuable to know some of these patterns early on. I hope that by documenting them here, I can save another developer some time and pain going down this path.

React Components for Google Maps – Part 1

Building React components for use with google maps can present some challenging problems for those who aren’t aware of the basics/quirks of developing with google maps. I aim to shed some light on the fundamentals of how to get the two to play nicely together, and to encourage developing the components in a way that will promote testability. As always, feel free to jump right into the code if you don’t need the TL;DR description of how it all works.

As you’ll notice in the example, I have created two npm modules to support this:

  • async-google-maps – provides a facade for asynchronously creating google map instances, as well as a base overlay view that is positioned by a lat/lng
  • idle-maps – builds on top of the async-google-maps module by providing a set of React components for creating google map instances and overlay content

Why?

The standard google maps modules are fine for simple use cases, such as displaying a marker for a given Lat/Lng point or InfoWindows for descriptive content about a given point. However, things get interesting when you need to be able to have full control over the style of a component, or to easily update content based on changes from the server. There are third-party libraries out there that will wrap the google maps primitives, but those have their own problems and often contain a bunch of bloat trying to handle all possible user interactions and browser quirks.

Prerequisites

  1. A basic understanding of spatial data primitives: coordinate systems, projections, basic geometric data types
  2. A basic understanding of React
  3. A basic understanding of google maps
  4. A basic understanding of browserify
  5. A google maps API key

Where we’re going

As an introductory step, we will start by building a custom OverlayView that is positioned by a point (as opposed to being positioned by a bounds, which is also possible). We will keep this simple by focusing on the basics of using React with google maps, avoiding fetching data from the server or binding UI interactions to a client-side router. We’ll cover those in later articles.

Separating concerns

My preferred method for separating concerns when building custom OverlayViews is to leverage the following pattern:

  • Build a constructor function which inherits from the google.maps.OverlayView constructor.
  • Implement a facade (in the spirit of the Command Pattern), which abstracts away the complexity of the coordination that needs to occur between the OverlayView and the React component
  • Implement the React component in such a way that it knows nothing about any of the google maps APIs

What’s next?

In the next article, we will take a deep dive into the implementation and highlight a few of the pitfalls I discovered while working on this myself.

Continuous Integration with the MEAN Stack

Travis-CI has great support for continuous integration with the MEAN stack. It took some research to achieve a complete solution, which then led to some revamping of my initial config. I want to share my notes with my agile hacker friends so you can take advantage of this in your projects, both large and small. This article is meant to be a continuation of my previous articles on setting up a proper test environment for the MEAN stack. Some things in this article are contextual and may seem out of place unless you’ve read the previous articles:

  1. TDD and BDD With The MEAN Stack: Introduction
  2. BDD with MEAN – The Server Part 1

Using continuous integration for large-scale projects is, in my opinion, a requirement. Some might debate its value for small/personal projects; however, I’d like to put that to rest as well.

For the non-believers

Some may think that CI for small-scale and/or personal projects is overkill. However, I believe CI for small/personal projects provides the following benefits:

  1. Verification that your application can run in an isolated environment, away from your personal machine, on standard infrastructure.
  2. Marketing and a certain level of assurance to potential consumers that you treat your tests as a first class concern.
    • Users can see first-hand that your tests run effectively, instead of just telling them to run npm test themselves after downloading your code.
    • It shows attention to the engineering process as well as an additional level of transparency.
  3. It is an enabler for Continuous Deployment. In fact, Travis-CI supports continuous deployment to several different environments out of the box.

I believe strongly in maintaining the application in an always shippable state. Having a proper CI strategy is one step on that path.

Key Elements

  • Externalized application configuration
  • Travis-CI config for public github projects

Application Configuration

I chose to use the node-config module with the javascript format. There are two key strengths to this module:

  1. It supports using javascript for the configuration format, which makes it easy to build reusable chunks of configuration by just exposing some simple functions to return parts of the configuration object literal.
  2. It maps the NODE_ENV environment variable to configuration files stored in the config directory in the root of your project by convention. This allows you to create specialized configurations for development, CI, production, etc.

I chose travisci as the name of my CI environment, which, as we’ll see later, is set as the value of the NODE_ENV environment variable exported in the .travis.yml config file.

The reusable parent configuration
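For illustration, a parent configuration might expose a function that child configs call to build their own object literals (file name, keys, and values here are mine, not from the repo):

// config/parent.js - a hypothetical reusable chunk of configuration.
'use strict';

module.exports = function buildConfig(baseDir) {
    return {
        paths: {
            server: baseDir + '/src/server'
        },
        db: {
            host: 'localhost',
            name: 'thoughtsom'
        }
    };
};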

travisci Environment Configuration
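And a sketch of the environment-specific file that builds on the hypothetical parent above (BUILD_DIR is explained under Secret Sauce below):

// config/travisci.js - picked up by node-config when NODE_ENV=travisci.
'use strict';

var buildConfig = require('./parent');

// Anchor all paths at the Travis-CI build directory, since we cannot
// rely on $HOME in that environment (see Secret Sauce below).
module.exports = buildConfig(process.env.BUILD_DIR);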

Travis-CI Configuration

The key ingredients to tying this configuration back to our node-config configuration file are the NODE_ENV and BUILD_DIR environment variables. With Travis-CI, you can export environment variables by declaring them under the following yaml block:

env:
  global:

Since I’ve chosen to use Grunt to automate task execution for the project and bower to manage front-end dependencies, these global modules must be installed during the before_script: hook. The other blocks (language, node_js, and services) tell Travis-CI what sort of environment is needed. What’s really cool is that we can run our BDD tests in this environment as well, since we have a mongodb instance available. This is one of my favorite features of Travis-CI.
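Put together, a .travis.yml along these lines captures the setup described above (the node version and exact values are illustrative):

language: node_js
node_js:
  - "0.10"
services:
  - mongodb
env:
  global:
    - NODE_ENV=travisci
    - BUILD_DIR=$(pwd)
before_script:
  - npm install -g grunt-cli bower
  - bower install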

Secret Sauce

I did some digging and discovered that Travis-CI clones your repo into $HOME/<username>, so the effective working directory will be $HOME/<username>/<cloned repo name>. However, their environment variable documentation recommends not relying on the value of the HOME environment variable. To work around this, I used the pwd sub-shell command in the .travis.yml file as part of the value for BUILD_DIR. In the travisci.js configuration file, I then use BUILD_DIR as the initial working directory, which avoids having a bunch of relative path references in the application’s require calls.

Parting Thoughts

I’m quite pleased with this setup for the Travis-CI environment. I went ahead and included their build badges in my thoughtsom github repo. I intended to also include code coverage integration with coveralls; however, I ran into some issues with the blanket code coverage tool. I plan to give istanbul a spin, but I have not had time to properly set that up yet. The long-term goal is to get continuous deployment to OpenShift working.

BDD with MEAN – The Server Part 1

As with any new endeavor, it pays to spend some time trying various solutions out and sometimes failing miserably. This is especially true for us progressive nerds who like to live on the bleeding edge without things like Stack Overflow to constantly save our ass. What I’d like to do is to help you avoid going through the pain of figuring out what works and what doesn’t.

As I mentioned in my previous post, I already have a project that serves as a working example if you wish to jump straight into the code: https://github.com/zpratt/thoughtsom . All of the gists used in this post were pulled from that project.

The first step on our journey to effective BDD testing with the MEAN stack will be to start wiring up the various tools we’ll need to use to get a working environment. Afterwards, we’ll build out a helper to manage our test environment and fixtures.

Let’s start by reviewing our toolbox:

Our Tool Box

  • Grunt
    • Grunt is used for general task running. It can be used in a similar manner to how you might use rake when working on a ruby-based project. It is particularly useful on the server side when you combine it with watch and nodemon, which are more runtime oriented.
  • Yadda
    • As mentioned earlier, Yadda will be serving as our BDD layer/framework. Yadda itself needs to integrate with another test framework like Mocha or Jasmine to provide a complete stack. It includes a Gherkin parser, multilingual support, and parameterized step definitions.
  • Mocha
    • Mocha is a general purpose javascript test framework. Mocha will be used more explicitly on the TDD side, but Yadda’s integration with it allows you to make use of the Before, BeforeEach, After, and AfterEach hooks to set up and tear down your test fixtures.
  • Sinon
    • Sinon is a standalone test library with great support for stubbing, spying, and mocking, plus additional assertions that complement your regular xUnit-style assertions when TDD’ing. Sinon also provides a fakeServer API that can be useful for prototyping endpoints or as a test double for external dependencies in BDD tests.
  • Chai
    • Provides both BDD-style assertions (expect and should) and xUnit-style assertions.
  • grunt-mocha-cov
    • A grunt plugin for running mocha tests. Since we’ve chosen to use mocha as the underlying test framework for Yadda, this will help automate the execution of our BDD tests.
  • Supertest
    • Used to test our express routes. Supertest has the ability to consume an express app and its associated routes. One of the things I especially like about Supertest is that it does not require you to have previously spun up an instance of your application in order to execute the tests.
  • Proxyquire
    • Allows us to inject stubs or fake objects into the require namespace.
  • Casual
    • A robust library for generating test data.
  • node-config
    • require paths and database connection information quickly get messy. I like to extract some of that mess out into config files.
  • Yaml
    • I prefer yaml config files because they are short and sweet. This module works nicely with the node-config module.

Folder structure

The next most logical step is to establish our folder structure. I have a specific folder structure that I like to use, and I am not currently aware of a yeoman generator that will scaffold anything based on my requirements, so for now I recommend creating the following structure:

 |-- config
 |-- Gruntfile.js
 |-- LICENSE
 |-- package.json
 |-- README.md
 |-- src
 |   |-- server
 |   |   |-- app
 |   |   |   `-- controllers
 |   |   |-- models
 |   |   |-- repositories
 |   |   `-- schemas
 `-- test
     |-- acceptance
     |   `-- server
     |       |-- features
     |       |   |-- step_definitions
     |       |   `-- support
     |-- helpers
     |   `-- stubs
     `-- unit
         |-- server
         |   |-- app
         |   |   `-- controllers
         |   `-- repository

The configuration file – config/default.yml

The configuration file is pretty straightforward, for the most part. I like to establish some path prefixes to avoid the third level of require hell: crazy relative path references. The one cryptic part might be the knownObjectId key, which I will explain later in the section on world.js.
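As a sketch of the shape of this file (every key except knownObjectId is illustrative, and the ObjectId value is a placeholder):

# config/default.yml
paths:
  server: src/server
  helpers: test/helpers

db:
  host: localhost
  name: thoughtsom-test

test:
  # A fixed primary key our route tests can rely on (see world.js below).
  knownObjectId: 53c7e08a8d12a43330e51d40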

The Gruntfile

The most important part of this is the mochacov configuration block. I am well aware of the simple mocha plugin; however, I have found that mochacov appears to be more active, and it also gives much better output when things go wrong (which happened to me quite often while I was experimenting with how to get this all configured). Aside from my usage of mochacov, I have chosen to run jshint against both the production code and the tests. I feel this is a good practice that helps ensure consistency between the code styles of production and test code, and it acts as a safety net against tricky syntactic boogers... I mean, issues that are not always obvious.
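An abbreviated sketch of the relevant pieces of the Gruntfile (paths and target names are illustrative):

// Gruntfile.js
'use strict';

module.exports = function (grunt) {
    grunt.initConfig({
        // Lint production code AND tests for consistent style.
        jshint: {
            all: ['src/**/*.js', 'test/**/*.js']
        },

        // Run the Yadda/Mocha suites through grunt-mocha-cov.
        mochacov: {
            options: {
                reporter: 'spec'
            },
            acceptance: ['test/acceptance/**/*.js'],
            unit: ['test/unit/**/*.js']
        }
    });

    grunt.loadNpmTasks('grunt-contrib-jshint');
    grunt.loadNpmTasks('grunt-mocha-cov');

    grunt.registerTask('test', ['jshint', 'mochacov']);
};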

The World

One of the useful things I learned from my cucumber-js experiment is the value of having a helper that can aid in setting up and tearing down the test environment and related fixtures. I have chosen to borrow this concept for use with Yadda/Mocha as well. The world.js file should go in test/acceptance/server/features/support/world.js; a sketch of it follows the list below.

There are three main highlights in the world.js helper:

  1. It handles connecting to and disconnecting from our local mongodb instance with connectToDB(done) and disconnectDB(done).
    • Note that both of these take done as a parameter, which is the callback mocha uses to determine when an asynchronous function is truly finished.
  2. It handles inserting test data for us, with at least one known ObjectId, via createThought(done).
    • This method currently inserts only one document into the datastore; the important part is that it inserts a record with a known primary key, which we can then use in our route test to make sure that we can actually return a result when we hit the /thought/:id route. The value is pulled from the knownObjectId key in our default.yml configuration file.
  3. It clears the database with clearDB(done), which means each of our test runs is idempotent.
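A simplified sketch of the helper along those lines (it assumes the config keys from the illustrative default.yml above; the version in the repo may differ in detail):

// test/acceptance/server/features/support/world.js
'use strict';

var mongoose = require('mongoose'),
    config = require('config');

// A minimal schema so this sketch is self-contained; the real model
// would live under src/server.
var Thought = mongoose.model('Thought', new mongoose.Schema({
    content: String
}));

function connectToDB(done) {
    mongoose.connect('mongodb://' + config.db.host + '/' + config.db.name, done);
}

function disconnectDB(done) {
    mongoose.disconnect(done);
}

function createThought(done) {
    // Insert one document with the known primary key from default.yml,
    // so the /thought/:id route test has an id it can rely on.
    Thought.create({
        _id: config.test.knownObjectId,
        content: 'a known thought'
    }, done);
}

function clearDB(done) {
    mongoose.connection.db.dropDatabase(done);
}

module.exports = {
    connectToDB: connectToDB,
    disconnectDB: disconnectDB,
    createThought: createThought,
    clearDB: clearDB
};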

Summary

As I have started to dig into this, I have realized that I could essentially write a book on this topic. If you’re anxious to take off with this and looking for more examples, I encourage you to take a look at my thoughtsom repo on github, which I continue to build out and try to commit chunks to each week. Getting the test environment and project structure right at the beginning will help us stay organized and have a solid configuration base to build on top of.

I plan on breaking this up into separate posts in the hope of making this a complete but consumable reference.

Here’s a rough sketch of what to expect:

  1. Part 1 (this post): cover the basic libraries we’ll use, and establish the prerequisites for automating our test runs.
  2. Part 2: Write our first feature and failing step definition using Supertest to verify an express route.
  3. Part 3: A little application architecture and more advanced usages, such as multiple step definitions and features.
  4. Part 4: Unit test overview: stubs and mocks with sinon

After these 4 parts, I plan to begin covering the UI side of things and demonstrate how most of the tools translate to both the server and browser sides of testing. I welcome any feedback on the quality of this article and the direction you’d like to see this take. Feel free to flame/praise me on twitter.

TDD and BDD With The MEAN Stack: Introduction

As the MEAN stack is growing in adoption, a variety of testing strategies have sprouted up on the interwebs. As I have started to dig deeper into automated testing strategies for both sides of the MEAN stack, I have found it difficult to find advice or material on how to setup an environment to support the mockist style of test-driving the implementation. Many articles offer some light recommendations, but are typically more classicist-oriented. I’d like to help fill the gap by outlining a tooling strategy which I believe enables the following:

  • Automation
  • Mockist unit testing
  • BDD/ATDD testing
  • End-To-End system testing

In case you want to dive straight into the code, I’ve set up a project on github to prove out the technology.

I’m going to assume you’ve already got a working MongoDB and Node.js environment. If not, take a look at a vagrant solution or mean.io.

One of my goals has been to try to find tools that can work on both the server and the browser. I also put a lot of stock into automation, so I have aimed to avoid tools that require manual loading of browser pages to launch the tests. This has proven to be quite a challenge, but the xUnit-side of testing is much closer to that goal than the BDD-side. However, all hope is not lost.

Let’s get down to business and outline the tools that are working well on both sides:

  • Grunt – The task runner
  • Mocha – The test framework
  • Chai – The assertion library
  • Sinon – The stub, spy, and mock Swiss Army knife

Some tools are sort of in the middle, where they can be made to work in both the browser and the server, but they aren’t rock solid in one or the other:

  • Yadda – BDD layer that works nicely with Mocha+Chai (needs a little help on the browser side, but I’m working on that)
  • Cucumber-js – (BDD) works great on the server side, but fails on the browser side (particularly because of a lack of support for karma integration)

And finally, some tools are targeted at one side or the other:

  • Karma – THE test runner for the browser
  • karma-browserifast – A browserify plugin for Karma, which is needed to automate running Yadda BDD tests in the browser
  • Supertest – Awesome library for testing Express-based RESTful endpoints
  • CasperJS – Drive user interactions through the UI (and works with Yadda)

There is more to come. I plan to write two more articles to dig further into the details of what it means to test drive on the server side and in the browser. For now, take a look at my Yadda/Karma example on Github for BDD in the browser and thoughtsom for BDD and unit testing on the server. I plan to build out a few more code examples on the browser side of testing before the next set of articles. In the mean time, I welcome your feedback.

P.S. I am aware of Protractor, but I have not had much of a chance to experiment with it yet.

Deploying ChiliProject on Tomcat

We have been using Redmine as our project management tool of choice at work for about a year and a half. We use it primarily to manage our Pentaho and data warehouse implementation, along with some smaller initiatives. It meets our needs quite well by allowing us to host multiple projects (unlike Trac) with different needs on the same deployment. I love how flexible and easy it is to configure; however, I also like to keep my eyes open for potential alternatives. The last time I upgraded our Redmine installation, I decided to deploy Redmine on our Tomcat environment in an attempt to avoid having to maintain an apache+passenger install purely for Redmine (most of the apps we host are java based, so my Tomcat skills are much more polished than my apache+passenger skills).

I recently discovered ChiliProject, which is a fork of the Redmine project, and thought I’d take it for a spin. While the Redmine project is very stable, it is a little slow to implement new functionality, which appears to be the primary reason for the ChiliProject fork. After browsing through their documentation and doing some google digging, I noticed that there is little to no information on how to deploy ChiliProject to Tomcat, so I thought I’d take the time to fill the gap a little. It is my hope that this guide will serve as a reference for those of you who are working in primarily Tomcat-oriented shops, but are interested in test driving a high quality ruby-based app without having to set up the required apache+passenger (or nginx+passenger) infrastructure.

This guide assumes that you already have the following knowledge:

  • Basic shell skills
  • How to setup and secure a mysql server
  • How to setup and secure a tomcat server

I developed this guide on a Fedora 16 installation at home, but I am confident that it will work just as well with CentOS, Scientific Linux, or any other RHEL derivative. You can also swap the package names for your favorite non-RedHat Linux distribution.

Required software packages:

  • tomcat6
  • java-1.6.0-openjdk
  • mysql-server

Recommended software packages:

  • tomcat6-admin-webapps
  • tomcat6-webapps

Files to download:

  • jruby-bin-1.6.5.1.tar.gz
  • chiliproject-2.6.0.tar.gz

Installing JRuby

A JRuby installation is only needed to provide an initial environment for configuring and testing the web application. Once we deploy the chiliproject war to Tomcat, our JRuby installation will only be needed for subsequent upgrades or installing and testing plugins. Consequently, I don’t bother installing any official JRuby packages. Here is my barebones installation process:

  1. If you don’t already have a bin directory in your home directory, create one:
    cd ~
    mkdir bin
  2. Extract the JRuby tarball to your newly created bin directory:
    cd bin
    tar -xzf ~/Downloads/jruby-bin-1.6.5.1.tar.gz
  3. Add the JRuby bin directory to your path:
    cd jruby-1.6.5.1/bin
    export PATH=${PATH}:${PWD}

Initial ChiliProject install

  1. Go back to your bin directory and extract the chiliproject tarball:
    cd ~/bin/
    tar -xzf ~/Downloads/chiliproject-2.6.0.tar.gz
  2. chiliproject uses bundler to manage the gems it depends on, so let’s install bundler:
    cd ~/bin/chiliproject-2.6.0
    jruby -S gem install -r bundler
  3. The stock Gemfile assumes you’re going to run this on a native ruby installation. This causes a problem when it tries to install rmagick. Edit the Gemfile to replace rmagick with rmagick4j. Edit the following lines:
    group :rmagick do
      gem "rmagick", ">= 1.15.17"
    to match:
    group :rmagick4j do
      gem "rmagick4j", ">= 0.3.7"
  4. Install the required gems with bundler:
    jruby -S bundle install --without test development
  5. Create the chiliproject database:
    mysql -u root -h localhost -p
    create database chiliproject character set utf8;
    create user 'chiliproject'@'localhost' identified by 'password';
    grant all privileges on chiliproject.* to 'chiliproject'@'localhost';
  6. Copy the database.yml.example to database.yml
    cd ~/bin/chiliproject-2.6.0/config
    cp database.yml.example database.yml
  7. Edit the production section of the database.yml file with your favorite editor. The key change for a tomcat deployment is to make sure that the adapter property is set to jdbcmysql and not mysql. Here’s a copy of mine:
    production:
      adapter: jdbcmysql
      database: chiliproject
      host: localhost
      username: chiliproject
      password: password
      encoding: utf8
  8. Copy the configuration.yml.example to configuration.yml
    cp configuration.yml.example configuration.yml
  9. Generate the session store:
    jruby -S bundle exec rake generate_session_store
  10. Start mysql (as root):
    service mysqld start
  11. Define the application’s mysql objects using rake db:migrate:
    RAILS_ENV=production jruby -S bundle exec rake db:migrate
  12. Load the default installation data:
    RAILS_ENV=production jruby -S bundle exec rake redmine:load_default_data
  13. Test the initial installation on the rails webrick server before moving forward:
    cd ~/bin/chiliproject-2.6.0
    RAILS_ENV=production jruby -S script/server -e production

Additional steps for final tomcat deployment

  1. Install warbler:
    jruby -S gem install -r warbler
  2. Create a warbler configuration:
    jruby -S warble config
  3. Edit the warbler configuration
    cd ~/bin/chiliproject-2.6.0
    vim config/warble.rb
  4. Uncomment and change config.dirs to match the following:
    config.dirs = %w(app config lib log vendor tmp extra files lang)
  5. Uncomment and change config.gems to match the following:
    config.gems += ["activerecord-jdbcmysql-adapter", "jruby-openssl", "i18n", "rack"]
  6. Uncomment and change config.jar_name to match the following:
    config.jar_name = "chili"
  7. Uncomment the runtimes configuration to match the following:
    config.webxml.jruby.max.runtimes = 4
  8. Install a specific version of the jruby-rack library:
    jruby -S gem install -r -v=1.0.9 jruby-rack
    jruby -S gem uninstall jruby-rack -v=1.1.3
    Note: I have had issues with the 1.1.x jruby-rack library. After deployment, you’ll get an exception complaining about a call to raw_post. The same error does not occur when using the 1.0.x version of jruby-rack, hence the need to install an older version.
  9. Build the war file:
    cd ~/bin/chiliproject-2.6.0
    jruby -S warble
  10. Prepare a folder for deploying the webapp (as root):
    cd /var/lib/tomcat6/webapps
    mkdir chili
  11. Extract the chili.war file to the new webapp directory (as root):
    cd /var/lib/tomcat6/webapps/chili
    unzip -q <user home>/bin/chiliproject-2.6.0/chili.war
  12. Set the proper permissions on the chili webapp directory (as root):
    cd /var/lib/tomcat6/webapps
    chown -R tomcat:tomcat chili/
  13. Start tomcat (as root):
    service tomcat6 start
  14. Start a browser and test your deployment by going to the following URL:
    http://localhost:8080/chili/

Other notes

For reference purposes, here are the gems that I installed at the time that I wrote this article (using jruby -S gem list --local):

*** LOCAL GEMS ***

actionmailer (2.3.14)
actionpack (2.3.14)
activerecord (2.3.14)
activerecord-jdbc-adapter (1.2.1)
activerecord-jdbcmysql-adapter (1.2.1)
activerecord-jdbcpostgresql-adapter (1.2.1)
activerecord-jdbcsqlite3-adapter (1.2.1)
activeresource (2.3.14)
activesupport (2.3.14)
bouncy-castle-java (1.5.0146.1)
bundler (1.0.21)
coderay (0.9.8)
fastercsv (1.5.4)
i18n (0.4.2)
jdbc-mysql (5.1.13)
jdbc-postgres (9.0.801)
jdbc-sqlite3 (3.7.2)
jruby-jars (1.6.5.1)
jruby-openssl (0.7.4)
jruby-rack (1.0.9)
json (1.6.5 java)
net-ldap (0.2.2)
rack (1.1.3)
rails (2.3.14)
rake (0.9.2.2, 0.8.7)
rdoc (3.12)
rmagick4j (0.3.7)
ruby-openid (2.1.8)
rubytree (0.5.3)
rubyzip (0.9.5)
sources (0.0.1)
warbler (1.3.2)

02/13/12 Update:

I have tested my steps against the new 3.0 release of Chiliproject along with jruby 1.6.6 and it works just fine. You’ll still need to make sure you use a downgraded version of jruby-rack. It appears the latest 1.0.x release is 1.0.10 as per https://github.com/jruby/jruby-rack/tags.