Tuesday, December 08, 2015

Continuous Delivery with Jenkins, Docker and Ansible


I like the DevOps philosophy, as I think every developer should have a broad knowledge of the ecosystem around them. In this article I will show how I put together a Continuous Delivery ecosystem for my personal full-stack projects.
Warning: This is not a tutorial, and it is definitely not easy for people who don’t have an initial understanding of Continuous Integration with Jenkins, Docker or Ansible (although the latter could be covered by knowledge of Puppet, Chef or any other orchestrator).
One more warning: I deserve some pain because I’m not concerned with security in this article. Yes, I know, I’ll drink beer until I get a hangover as punishment xD
It’s going to be split into 3 parts:
  • Part one: The Nodejs project and making a Docker with it in our machine
  • Part two: Automating redeploy of the Docker container in a remote machine
  • Part three: Redeploy using Jenkins and Ansible when our master branch has changed

Part one: The Nodejs project and making a Docker with it in our machine

The Nodejs Project

I have simplified the project as much as possible and it’s going to be a very simple Node project that just shows a plain-text message on the screen. The “project” is available at https://github.com/sayden/simplest-express-server

Installing Docker on our machine

First we have to install Docker in our machine for the first tests:
# Red Hat based distributions
sudo yum install -y docker

# Debian based (the package is named docker.io on Debian/Ubuntu)
sudo apt-get install -y docker.io
We’ll use an image from docker.io called docker.io/node:4, which refers to the 4.2.3 LTS version. The Dockerfile uses this “base” container and will simply git clone a repo and expose port 3000 (the one used by this specific “app”).
FROM    node:4

# Arguments
ENV REPO="https://github.com/sayden/simplest-express-server.git"
ENV DEST=/srv/node-server
ENV APP=${DEST}/server.js

# Ensure git is installed
RUN apt-get update && apt-get install -y git

# Clone the github repo
RUN git clone ${REPO} ${DEST}

# Go to cloned folder
RUN cd ${DEST} && npm install

# Expose app port
EXPOSE 3000

# Launch app
CMD ["sh", "-c", "cd ${DEST} && npm start"]

Building Docker image

So let’s build the image.
# As root
docker build -t mariocaster/node-server .
Note the trailing “.” in the command: it points to the folder containing the Dockerfile above.

Running Docker image

Once we have the image built (we can check it with sudo docker images), it’s time to run it:
# As root
docker run -d -p 41600:3000 mariocaster/node-server
With -p 41600:3000 we are telling Docker to redirect port 41600 on our machine to port 3000 in the container, the one the Node server listens on.

Part two: Automating redeploy of the Docker container in a remote machine

Installing Docker on remote host

I’m going to use a VirtualBox virtual machine with CentOS 7 installed, running on 192.168.1.39 in my case. This machine is going to be called THE_HOST.
First we need password-less access to the machine, so I use ssh-copy-id to copy the public key from my ~/.ssh folder to THE_HOST. I have a handy bash script for this because I never remember the exact syntax:

BONUS: Gaining SSH access to the machine.

#!/bin/bash

# gain-ssh-access
echo -e "\n"

if [ "$1" = "-i" ] ; then
  echo "Using interactive mode"

  echo -e "Write the name of the remote user: \c"
  read user

  echo -e "Write the host Ip or name: \c"
  read host

  echo "A public key from ~/.ssh/id_rsa.pub will be used"
  echo "Remote machine will probably ask for permissions password"

  ssh-copy-id -i ~/.ssh/id_rsa.pub $user@$host
else
  echo "You can use interactive mode with -i flag"
  echo "Use ssh-copy-id command if not. example:"
  echo "ssh-copy-id -i ~/.ssh/id_rsa.pub user@host"
fi

echo -e "\n"
Once we have access, I use another “handy” Ansible Playbook to install Docker on THE_HOST. The playbook is the following:
# add-docker.yml
- hosts: all
  become: yes
  become_method: sudo

  tasks:
  - name: Add Docker
    yum: name=docker state=present

  - name: Add python-docker-py
    yum: name=python-docker-py state=present

  - name: Docker service must be started
    service: name=docker state=started

Of course, in my Ansible hosts file I have the proper user and password configuration for the machine:
# hosts

[local]
192.168.1.39      ansible_sudo_pass=osboxes.org    ansible_ssh_user=osboxes
So I can launch the command like this:
ansible-playbook -i hosts add-docker.yml

Adding the Dockerfile and launching it

Next, we need an Ansible Playbook that copies the Dockerfile to THE_HOST and then builds or restarts the container:
- hosts: all
  become: yes
  become_method: sudo
  vars:
    # For handlers/restart-docker.yml
    http_port: 3000
    host_port: 41600

    # Shared here and in handlers/restart-docker.yml
    image_name: mariocaster/node-server

    # Dockerfile to use and destination in target's machine
    source_dockerfile_dir: /path/to/Dockerfile
    docker_dest_dir: /srv/docker

    git_repo: https://github.com/sayden/simplest-express-server.git

  tasks:
    - name: Copy Dockerfile to server
      copy: dest={{ docker_dest_dir }} src={{ source_dockerfile_dir }}

    - name: Re/build Docker image
      docker_image: name={{ image_name }}
                    path={{ docker_dest_dir }}
                    nocache=yes
                    state=build
      notify: Restart Docker image

  handlers:
    - name: Restart Docker image
      docker: name=node ports={{ host_port }}:{{ http_port }} image={{ image_name }} state=reloaded
Ok, very easy: it copies the Dockerfile, builds the image and restarts (or starts, the first time) the container.
If we now access port 41600 on THE_HOST we’ll see our server running.

Redeploying with one line

We can reuse our Ansible Playbook to redeploy the server as many times as we want; we simply run the same Playbook again. If you are using your own git repo, try pushing some change and launching the same script again to see how the server “magically” changes.

Part three: Redeploy using Jenkins and Ansible when our master branch has changed

Ok, now we can redeploy as many times as we want, so the next step is to configure Jenkins to redeploy the container every time it finds (via polling) any change in the git repo.
There’s nothing very special in the Jenkins job; we’ll simply execute the redeploy Ansible Playbook. As there are thousands of tutorials about how to trigger Jenkins on a Git push (via polling or a git hook) I won’t go into it, but I’ll leave a handy tutorial here: http://www.nailedtothex.org/roller/kyle/entry/articles-jenkins-gittrigger
Also, you can use the Jenkins Ansible Plugin if you want (https://wiki.jenkins-ci.org/display/JENKINS/Ansible+Plugin), but I feel very comfortable with bash scripts and usually prefer to write my own, so here’s the one I used to redeploy the container:
export ANSIBLE_PLAYBOOKS=/var/local/jenkins

ansible-playbook -vvvv ${ANSIBLE_PLAYBOOKS}/rebuildDocker.yml -i ${ANSIBLE_PLAYBOOKS}/hosts
Needless to say, the jenkins user must have SSH access to THE_HOST, as well as access to the Ansible Playbook and the Ansible hosts file.
Notes about this last part: I don’t go too deep into Jenkins configuration because, as I said at the beginning of the article, users trying to achieve Continuous Delivery in their projects must already have some background knowledge of the tools, as this is not a “tutorial” about a tool but more of a “full solution” using various available tools (not necessarily the best ones).

Wednesday, August 26, 2015

Is it a REST? Is it a SQL? No! It's GraphQL with MongoDB


This is a project that shows how to build an Express app with GraphQL and MongoDB persistence… written in ES6 :)

How to install the example

$ git clone https://github.com/sayden/graphql-mongodb-example.git
$ cd graphql-mongodb-example
$ npm install
$ gulp

Using it

For simplicity, we will use Postman to make queries:
  • Asking for the user with ID 0 (actually, position 0 in the user list, for simplicity)
query RootQuery {
    user (id:0) {
        name
      surname
    }
}
Gives
{
    "data": {
        "user": {
            "name": "Richard",
            "surname": "Stallman"
        }
    }
}
  • Asking for the name, surname, age and ID of user with ID 2
query RootQuery {
    user (id:2) {
        name
        surname
        age
        _id
    }
}
Gives
{
    "data": {
        "user": {
            "name": "Linux",
            "surname": "Torvalds",
            "age": 8,
            "_id": "55ddeec2a54c37e61e0a2120"
        }
    }
}
  • Adding a new user called Bjarne Stroustrup of age 64 and getting the new info
mutation RootMutation {
    addUser (name: "Bjarne", surname:"Stroustrup", age:64) {
        name
        surname
        _id
        age
    }
}
Gives
{
    "data": {
        "addUser": {
            "name": "Bjarne",
            "surname": "Stroustrup",
            "_id": "55ddf61ed082460325e2b65c",
            "age": 64
        }
    }
}
Checking MongoDB:
{
    "name" : "Bjarne",
    "surname" : "Stroustrup",
    "age" : 64,
    "_id" : ObjectId("55ddf61ed082460325e2b65c"),
    "id" : "55ddf61ed082460325e2b65b",
    "__v" : 0
}

GraphQL

GraphQL is a new concept for defining queries from the front end. It’s a mix between SQL and REST, but the best way to understand it is through an example.

The example application

The application is pretty simple: it uses an app.js where Express is configured and where the Schema of the app is imported.
Our only endpoint will be ‘/’. Soon you will see that we don’t need more.
We also have a ‘schema.es6’ that holds most of the GraphQL schema configuration. But first, let’s start with the models.

Models folder

In the models folder is where most of the magic is happening.
When you open it, you will see a subfolder called User.
  • Every file ending in QL is related to GraphQL.
  • UserSchema.es6 is the Mongoose schema.
So, in any normal development we could have a Mongoose schema that we use to connect to our MongoDB instance. Nothing has changed yet.

The concept of Query and Mutation

In GraphQL we are going to separate the actions of our API into Queries (they don’t alter the database, so they can be processed in parallel; the typical GET in REST or SELECT * FROM… in SQL) and Mutations (they alter the database and are processed serially; a POST, DELETE or PUT in REST, or a DELETE FROM, INSERT INTO… in SQL).
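To make that difference concrete, here is a small sketch in plain JavaScript (Promises only, no GraphQL library involved): query-like fields are started all at once, while mutation-like fields are awaited one by one. The slow helper and both runners are hypothetical names for illustration, not part of graphql-js.

```javascript
// Illustration only (plain Promises, not the graphql library): sibling
// query fields may be resolved in parallel, while mutation fields run
// one after another, in declaration order.
const done = [];                       // records completion order
const slow = (name, ms) => () =>
  new Promise(resolve => setTimeout(() => { done.push(name); resolve(name); }, ms));

// Queries: start every field at once, like parallel SELECTs
function runQueries(fields) {
  return Promise.all(fields.map(f => f()));
}

// Mutations: await each field before starting the next, like serial INSERTs
async function runMutations(fields) {
  const results = [];
  for (const f of fields) {
    results.push(await f());
  }
  return results;
}

// With [slow('a', 50), slow('b', 10)], runQueries lets the short field 'b'
// finish first, while runMutations always completes 'a' before 'b'.
```

Both runners return their results in field order; the difference is only in how the waits overlap, which is exactly why queries parallelize safely and mutations don’t.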

4 files for every GraphQL “model”

This is a personal preference: splitting the Model into 4 files, as they could grow dangerously and I don’t like big (>1000 lines) files.
  • HobbyTypeQL.es6 -> This is what we could call GraphQL model where you establish the fields it has, their type (string, int…) and so on.
  • UserMutationsQL.es6 -> Here we will describe the mutations, the actions that can alter the database.
  • UserQueriesQL.es6 -> The queries against this model on the database, they can’t alter it.
  • HobbyQL.es6 -> A file to govern them all… I mean… a single point of entry to the entire model.

User type file

The User type file is where we really define the properties of a model. We define what it is composed of, but we aren’t yet defining what it can do.
So, for example, a typical User Type file could be like the following:
import {
  GraphQLObjectType,
  GraphQLNonNull,
  GraphQLID,
  GraphQLString,
  GraphQLInt
} from 'graphql';

export default new GraphQLObjectType({
    name: 'User',
    description: 'A user type in our application',
    fields: () => ({
      _id: {
        type: new GraphQLNonNull(GraphQLID)
      },
      name: {
        type: new GraphQLNonNull(GraphQLString)
      },
      surname: {
        type: new GraphQLNonNull(GraphQLString)
      },
      age: {
        type: GraphQLInt
      }
    })
  });
  1. We define a name for the type so it can be recognized through the entire schema and in our calls.
  2. We define a description in case we ask (through an HTTP call) for information about the exposed schema (we will cover how to do this later).
  3. And we define the fields as properties of the model: _id as a unique ID (GraphQLID) in the database, name, surname and the optional age.
Really really simple, isn’t it?
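As a rough illustration of what those field declarations mean, here is a hypothetical plain-JavaScript validator mirroring the type above: non-null fields must be present, optional ones may be missing. None of this code is part of graphql-js (the library enforces this for you); it only shows the semantics of GraphQLNonNull.

```javascript
// Hypothetical sketch of what the User type declares: _id, name and
// surname are non-null (GraphQLNonNull), age is optional (plain GraphQLInt).
const userFields = {
  _id:     { required: true },
  name:    { required: true },
  surname: { required: true },
  age:     { required: false }
};

// Throws if a required field is null or missing, like a non-null violation.
function validateUser(input) {
  for (const [field, spec] of Object.entries(userFields)) {
    if (spec.required && input[field] == null) {
      throw new Error(`Field "${field}" is non-null but missing`);
    }
  }
  return true;
}
```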

User Queries file

In the User Queries file, we define the operations that can ask our persistence layer (our database) for information but cannot modify it.
// Assumed imports for this file
import { GraphQLID } from 'graphql';
import UserType from './HobbyTypeQL.es6';
import User from './UserSchema.es6'; // the Mongoose model

export default {
  user: {
    type: UserType,
    args: {
      id: {
        type: GraphQLID
      }
    },
    resolve: (root, {id}) => {
      return new Promise((resolve, reject) => {
        //User is a Mongoose schema
        User.find({}, (err, res) => {
          // Actually, we are not searching the ID but returning the position in the iterator
          err ? reject(err) : resolve(res[id]);
        });
      });
    }
  }
};

  1. We define the type of object we will query. Here ‘user’ means that we will ask for a response like {user: "username"} when we make a query like query Query { user }
  2. type We have to define a type for the returned object. In this case it is the UserType that we defined previously.
  3. args Arguments for the query; in this case we have defined an id argument, so our query could be query UserQueries { user (id:1) } to ask for the id 1 of the database.
  4. resolve This was the most difficult part for me to understand. Resolve is the function your system executes to retrieve the queried object. It always receives a root param and a second param with the arguments. Resolve should also return a promise, but I’m not sure if this is mandatory. In our case, resolve creates a Promise, makes a query using Mongoose and directly returns the result.
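The Promise construction inside resolve is just the usual Node callback-to-Promise pattern. Here is a self-contained sketch of it, where fakeFind is a hypothetical stand-in for Mongoose’s User.find(query, callback):

```javascript
// Self-contained sketch of the resolve pattern above. `fakeFind` is a
// hypothetical stand-in for Mongoose's User.find(query, callback).
const users = [
  { name: 'Richard', surname: 'Stallman' },
  { name: 'Linux', surname: 'Torvalds' }
];

function fakeFind(query, callback) {
  // emulate an asynchronous database call with a Node-style callback
  setImmediate(() => callback(null, users));
}

// Same shape as the resolve function in the queries file: wrap the
// callback API in a Promise and pick the requested position.
function resolveUser(root, { id }) {
  return new Promise((resolve, reject) => {
    fakeFind({}, (err, res) => {
      err ? reject(err) : resolve(res[id]);
    });
  });
}
```

As in the article’s snippet, the “id” here is really a position in the returned list, not a database lookup by _id.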

User Mutations file

Our mutations file will contain operations to execute serially that can alter our database. It’s very similar to the queries file:
// Assumed imports for this file
import { GraphQLNonNull, GraphQLString, GraphQLInt } from 'graphql';
import UserType from './HobbyTypeQL.es6';
import User from './UserSchema.es6'; // the Mongoose model

export default {
  addUser:{
    type:UserType,
    args: {
      name:{
        name:'name',
        type:new GraphQLNonNull(GraphQLString)
      },
      surname:{
        name:'surname',
        type: new GraphQLNonNull(GraphQLString)
      },
      age: {
        name:'age',
        type: GraphQLInt
      }
    },
    resolve: (root, {name, surname, age}) => {
      //Creates a new Mongoose User object to save (age included too)
      var newUser = new User({name:name, surname:surname, age:age});

      return new Promise((resolve, reject) => {
        newUser.save((err, res) => {
          err ? reject(err): resolve(res);
        });
      });
    }
  }
};
  1. We define an operation called addUser to add new users to the database.
  2. In args we defined the arguments that must be passed to execute the operation: name and surname as mandatory and age as optional; the mandatory ones are wrapped in a new GraphQLNonNull() object.
  3. resolve must also return a promise. Here we create a new Mongoose User object, save it and return a promise with the result.

User QL file

Finally, when defining models, we like to use a [Model]QL file that gathers everything defined previously.
import _UserType from './HobbyTypeQL.es6';
import _UserQueries from './UserQueriesQL.es6';
import _UserMutations from './UserMutationsQL.es6';

export const UserType = _UserType;
export const UserQueries = _UserQueries;
export const UserMutations = _UserMutations;
This is not mandatory at all, but structurally I prefer the approach of importing a single object per model in the next file, the schema.

The schema file

Schema is a bit more complex. We will join here all the models operations.
// Assumed imports: GraphQLObjectType and GraphQLSchema come from 'graphql';
// UserQueries and UserMutations come from the model's QL file shown above.
import { GraphQLObjectType, GraphQLSchema } from 'graphql';

let RootQuery = new GraphQLObjectType({
  name: 'Query',      //Return this type of object
  fields: () => ({
    user: UserQueries.user,
    userList: UserQueries.userList
  })
});


let RootMutation = new GraphQLObjectType({
  name: "Mutation",
  fields: () => ({
    addUser: UserMutations.addUser
  })
});


let schema = new GraphQLSchema({
  query: RootQuery,
  mutation: RootMutation
});

export default schema;
  1. We create a GraphQLObjectType for queries, in this case called RootQuery, and a mutation object called RootMutation.
  2. We must give both a name (I don’t know exactly why yet, because you don’t need to use it).
  3. Then you must add, as fields, all the operations that we have defined previously. In our case we have given the operations the same names here as in our queries and mutations files.
  4. Finally, we must create a GraphQLSchema object to hold the query and mutation objects.
We have our schema complete. Now we only have to expose it through an endpoint.

The Server

The server is a common Mongoose+Express server with a small modification:
app.use(bodyparser.text({type: 'application/graphql'}));

app.post('/', (req, res) => {
  //Execute the query
  graphql(schema, req.body)
    .then((result) => {
      res.send(result);
    });
});
  1. We must know that our GraphQL queries must come with the application/graphql Content-Type. We use body-parser to read the plain-text request body.
  2. Then we define a single endpoint at ‘/’ to receive all queries and mutations. This is completely different from how you would do it in a RESTful API.
  3. Finally, we call the graphql() function with our defined schema. Pretty simple.

Relay

You can see a more complex example of this using Relay here: https://github.com/sayden/relay-starter-kit

Contributions

Please feel free to help, especially with grammar mistakes, as English is not my mother tongue and I learned it watching “Two and a half men” :)
Any other contribution should stay on the road of simplicity, to help others learn GraphQL. Contributions must have an associated README file or update this one.

Source code can be found at http://github.com/sayden/graphql-mongodb-example