Archive for the ‘Devops’ Category

Load Balancing With NGINX

May 7, 2018

Why do we need a load balancer? Can't we just build one very stable server with a failure rate of zero? And when do we need to implement a load balancer?

Load balancing has at least two main purposes:

  1. Increasing fault tolerance
  2. Distributing load

Increasing fault tolerance means ensuring and improving the uptime and reliability of an application or system. This is critical for services that sell high availability: once downtime exceeds the tolerated limit, it causes big trouble and endangers business continuity. For example, when services like Facebook or WhatsApp go down, users who rely on them as their business backbone lose trust; if this happens frequently they will abandon FB and WhatsApp and, in the worst case, move to competing services.

The second purpose of load balancing is distributing load. Load distribution makes scaling out (horizontal scaling) much easier than increasing the capacity and specification of one single server, which is a big advantage for highly concurrent transaction loads. Using a load balancer, we can add or remove servers easily without downtime, which in the end also drives cost savings.

In this post we will configure NGINX as a load balancer using three major techniques:

  1. Round-Robin
  2. Least Connections
  3. Hash Load Balancer

 

ROUND ROBIN

This could be the most common and simplest load-balancing mechanism: the first request is processed by the first available server in the defined pool, the second request by the second server, and so on, on a rotating basis. You can see it as a loop pattern, where request handling returns to the first server once the last server has processed a request.

The round-robin load balancer has one advantage: it needs no further configuration on the server side. It also has a disadvantage: it ignores the current server load, because there is no check mechanism to ensure the selected server can actually handle the request.

For example, suppose we have two identical Rails apps running on ports 3000 and 4000 on a local server, each with its own ID. Imagine these are tiny apps that only echo out their app ID. The nginx config will look like this:
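The original snippet isn't reproduced here, so below is a minimal sketch of what such a config might look like; the upstream name rails_app is an assumption, and the weight of 3 matches the description that follows:

    # inside the http { ... } block of nginx.conf
    upstream rails_app {
        server 127.0.0.1:3000 weight=3;  # first app; receives roughly 3x the requests
        server 127.0.0.1:4000;           # second app
    }

    server {
        listen 80;

        location / {
            proxy_pass http://rails_app;  # forward all traffic to the upstream group
        }
    }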

Inside the upstream block directive, under the http directive, we define two servers and the name of the upstream group. Along with the first server definition (127.0.0.1:3000), we also define a weight to indicate the server's capacity: the first server will serve three times more requests than the other server. This is useful when we have two servers with different capacities. In the server directive, we use the proxy_pass directive to point all traffic at the upstream group.

NGINX uses the round-robin algorithm by default, since we didn't specify any algorithm. This configuration makes all requests be served sequentially by the upstream servers: the first request is handled by the first server, the second request by the second server, and so on.

LEAST CONNECTIONS

In the Least Connections algorithm, requests are distributed to the server with the fewest active connections. This differs from round-robin, which does not consider server load or response time when distributing requests. The Least Connections algorithm assumes the number of connections is proportional to server load.

This is a sample NGINX config using the Least Connections algorithm:
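Again as a sketch, under the same assumptions as above; the only change is the least_conn directive at the top of the upstream block:

    # inside the http { ... } block of nginx.conf
    upstream rails_app {
        least_conn;  # pick the server with the fewest active connections
        server 127.0.0.1:3000;
        server 127.0.0.1:4000;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://rails_app;
        }
    }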

As we can see from the config above, the main difference is that we define the least_conn algorithm in the upstream directive.

When a request comes in, NGINX determines which server has the fewest active connections and assigns the request to that server.

HASH LOAD BALANCING

When a load balancer is configured to use the hash method, it computes a hash value (commonly the client's IP address is used as the key) and then sends the request to the matching server, ensuring that connections within existing user sessions are consistently routed to the same back-end server. This means every subsequent request with the same hash will always be routed to the same upstream server.

The config itself is the same as for the round-robin and Least Connections algorithms. The difference is that we define hash in the upstream block and use the client's remote address as the key for building the hash.
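A sketch under the same assumptions, using the built-in $remote_addr variable as the hash key (the server block is unchanged):

    # inside the http { ... } block of nginx.conf
    upstream rails_app {
        hash $remote_addr;  # requests from the same client IP always reach the same server
        server 127.0.0.1:3000;
        server 127.0.0.1:4000;
    }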

 

In conclusion, NGINX supports these three algorithms when we set it up as a load balancer, from the simple round-robin algorithm to the more complex hash algorithm.

For further reading, refer to the NGINX documentation here.

 

Jakarta, May 7th, 2018

 

Aah Ahmad Kusumah


Spin Up Terraform and Chef – Part 2

April 23, 2017

This is the second part, continuing from this post. Here we will cover Chef: how to write recipes, and how to configure and test our recipes on a target host managed by Vagrant.

What is Chef? Chef is a configuration management and automation platform that achieves speed, scalability, and consistency by turning infrastructure into flexible, human-readable, versionable, and testable code.
Users only need to write recipes that describe to Chef how servers and applications should be managed and configured. Chef offers:
1. Consistency

It means we can write recipes and run them multiple times, on the same or different machines, with the same result.

2. Efficiency

Chef can configure thousands of servers and keep all configuration in one place. No more code scattered across each server's configuration.

3. Scalability

Chef has the capability to scale up infrastructure with well-managed code using roles, nodes, and environments.

4. Reusability

We can reuse recipes and cookbooks with ease, and they will produce the same result.

Next, we will learn how to write a simple configuration using Chef from scratch.
We need the following prerequisites installed before getting started with Chef:
1. Vagrant -> for managing virtual machines
2. VirtualBox -> Oracle's virtual machine software
3. Ruby -> for writing recipes and resources
4. Git (optional) -> if you want to store the code in a Git repository

Let's start:
*Preparation*

1. Create a folder for storing our recipes; let's name it sheeps-nolegs

mkdir sheeps-nolegs && cd sheeps-nolegs

2. Create a gemset; let's name it chef

rvm gemset create chef
rvm gemset use chef

3. Create a Gemfile inside it, consisting of the following lines:

source 'https://rubygems.org'

gem 'berkshelf'
gem 'knife-solo'

4. Run bundle install

bundle install

knife-solo is a command-line tool that provides an interface between a local Chef repository and remote hosts.
knife-solo has five core commands:
1. knife solo init -> creates a structured directory for Chef
2. knife solo prepare -> installs Chef on a given host
3. knife solo cook -> uploads the kitchen to a given host and runs chef-solo on it
4. knife solo bootstrap -> a combination of prepare and cook
5. knife solo clean -> removes the uploaded kitchen from a given host

Berkshelf will manage cookbooks and their dependencies.

*Working With Chef*

1. Let's create the structured Chef folder using knife solo

knife solo init .

This will produce the following folders and files:
a. .chef -> hidden folder containing knife.rb and pem files
b. Berksfile -> contains the sources for cookbooks to download
c. cookbooks -> folder to store vendored cookbooks
d. data_bags -> folder to store Chef data bags
e. environments -> folder for Chef environments
f. nodes -> folder containing Chef nodes
g. roles -> folder containing Chef roles
h. site-cookbooks -> folder to store custom cookbooks
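Laid out as a tree, the generated skeleton looks roughly like this:

    .
    ├── .chef/            # knife.rb and pem files
    ├── Berksfile
    ├── cookbooks/        # vendored cookbooks
    ├── data_bags/
    ├── environments/
    ├── nodes/
    ├── roles/
    └── site-cookbooks/   # custom cookbooks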

2. In the .chef folder we have the file knife.rb, which contains the default chef-repo-specific configuration:

a. cookbook_path -> the sub-directory for cookbooks
b. node_path -> the sub-directory for nodes
c. role_path -> the sub-directory for roles
d. environment_path -> the sub-directory for environments
e. data_bag_path -> the sub-directory for data bags
f. knife[:berkshelf_path] -> the directory for cookbooks vendored from the Berksfile

3. Let's try to install a cookbook onto a node; in this case we'll install apache2. Edit the Berksfile and add the following lines:

source "http://api.berkshelf.com"
cookbook 'apache2'

4. Execute the berks command to install the apache2 cookbook

berks install

This will install apache2 along with its dependencies.
5. Then execute berks vendor cookbooks to move the cookbooks from ~/.berkshelf/cookbooks into the cookbooks folder


6. Let's define a node.

A node represents any physical machine, virtual machine, or cloud instance. The node file is named after the machine's domain, such as sheeps.com, and consists of a valid JSON configuration for that specific machine.
– let's create a node for sheeps.com inside the nodes folder
– vi nodes/sheeps.com.json
{
    "name": "sheeps.com",
    "run_list": [
        "recipe[apache2]"
    ]
}
– run_list is the main configuration in this file; it contains an array of recipes and roles.
– in the sample above, it will execute the apache2 recipe from the apache2 cookbook

*Vagrant*

In this post we will use Vagrant for testing cookbooks. Vagrant is free and open-source software for creating and configuring virtual development environments.

1. Download Vagrant from this link and follow the installation instructions.
2. Vagrant uses base images instead of creating virtual machines from scratch; these bases are known as boxes in Vagrant. Let's use Ubuntu 12.04 (precise64) as the base image; you can find other boxes in the Vagrant cloud at https://atlas.hashicorp.com/search.

a. Create a Vagrantfile inside our chef-solo directory:
vagrant box add precise64 http://files.vagrantup.com/precise64.box --force
This downloads the precise64 box from the repository; the --force option replaces any existing precise64 box.
vagrant init precise64
This command creates a Vagrantfile, which typically looks like this:
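As a sketch, the generated file boils down to the following (the real file also contains many commented-out configuration options):

    # Vagrantfile generated by `vagrant init precise64`, stripped of comments
    Vagrant.configure("2") do |config|
      config.vm.box = "precise64"
    end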

b. To check that Vagrant is running well, invoke the following command:
vagrant up
Then we can run vagrant ssh to test the SSH connection into the running Vagrant VM.
*Note: if prompted for a password, use 'vagrant', the default password for the vagrant user


c. In some cases the VM won't have a Chef client; we can install it on the target VM using knife solo prepare, as discussed above.
knife solo prepare vagrant@localhost -i ~/.vagrant.d/insecure_private_key -p 2222 -N sheeps.com
option -i specifies the SSH key for the machine
option -p specifies the SSH port on the target VM
option -N specifies which node will be used

d. Now we can run our kitchen on the node using the knife solo cook command:
knife solo cook vagrant@localhost -i ~/.vagrant.d/insecure_private_key -p 2222 -N sheeps.com
e. By default apache2 runs on port 80; we can forward it to another port, say 8080, by adding the following line to the Vagrantfile:
config.vm.network :forwarded_port, guest: 80, host: 8080
f. Then invoke vagrant reload to reload our target VM
g. Try accessing http://localhost:8080; if it returns 404, our apache2 has been successfully installed
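A quick way to verify this from the host, assuming curl is available; the 404 presumably comes from the apache2 cookbook leaving the default site disabled, so any Apache response means the install worked:

    # Fetch only the response headers; an HTTP 404 served by Apache confirms
    # the server is up and reachable through the forwarded port
    curl -I http://localhost:8080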

End of Part 2
In the next post we will go deeper into how to define roles, environments, and data_bags, and how to create custom recipes.

 

Jakarta, 23 April 2017

 

Aah Ahmad Kusumah

Categories: Devops, Tutorial

Spin Up Terraform and Chef – Part 1

February 24, 2017

What is Terraform? Quoting from its documentation:

Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. Terraform can manage existing and popular service providers as well as custom in-house solutions.

For detailed features and everything else Terraform offers, please refer to its concise and complete documentation.

Terraform has four main components:

  1. Providers: Terraform can talk to back-end service providers, such as AWS, DigitalOcean, etc.
  2. Resources: resources are the building blocks of a Terraform configuration
  3. Variables: variables store all the variables used within a Terraform configuration; this makes the configuration friendlier and more flexible
  4. Configuration: files with the *.tf extension that store the Terraform configuration

This post won't explain the features in detail; it will consist of two parts:

  1. Installation and Basic configuration
  2. Test and Example to provision environment on AWS

Terraform Installation

Terraform installation is quite straightforward.

  1. Download the installation archive here and choose the appropriate version for your system; this post demonstrates on macOS, and Linux should be similar.
  2. Extract the downloaded zip and copy the terraform binary into a folder that will be set as the Terraform path, say /Users/kusumah/Documents/Development/terraform
  3. Set the path; mine uses .bash_profile (see the sketch after this list)
  4. Invoke source ~/.bash_profile to update the environment immediately
  5. Check the installation by invoking the terraform command; on success it prints the usage help
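The screenshot of the .bash_profile change is not reproduced here; assuming the example directory above, the added line would look like this:

    # Append the Terraform folder to PATH so the terraform binary is found
    export PATH="$PATH:/Users/kusumah/Documents/Development/terraform"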

Next, we will try to spin up an AWS server using Terraform.

Terraform AWS

After finishing the Terraform installation, let's try to spin up a simple AWS server using Terraform.

  1. Create a workspace directory; in this case I'll create it at /Users/kusumah/Documents/TUTORIAL/TERRAFORM/sample
  2. Create the Terraform configuration; let's name it spinupserver.tf (a sketch of this file follows the list). In lines 2-6 we define the provider to talk to, in this case AWS; the access_key and secret_key can be obtained from AWS, and region defines which AWS region will be used. In lines 9-12 we define the AWS key pair used to access the created AWS instance; this consists of key_name and public_key. For generating a public key on a Linux machine, refer to this link. Lines 15-21 define the AWS instance.
  3. Save that file in our workspace directory
  4. Now invoke the terraform plan command; on success it shows the planned changes
  5. Now we can invoke terraform apply to apply the plan
  6. And voila, our new instance is created successfully in AWS
  7. To destroy the current plan and terminate the instance, just invoke terraform destroy; it will automatically terminate the created instance defined in the Terraform configuration
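Since the original file is only shown as a screenshot, here is a hypothetical reconstruction following the layout described in step 2; the keys, region, key pair name, and AMI values are all placeholders, not the author's actual values:

    # spinupserver.tf (hypothetical reconstruction)
    provider "aws" {                       # lines 2-6: the provider definition
      access_key = "YOUR_ACCESS_KEY"       # obtained from the AWS console
      secret_key = "YOUR_SECRET_KEY"
      region     = "us-east-1"             # assumed region
    }

    resource "aws_key_pair" "deployer" {   # lines 9-12: key pair for SSH access
      key_name   = "deployer-key"
      public_key = "ssh-rsa AAAA... user@host"  # contents of the generated public key
    }

    resource "aws_instance" "example" {    # lines 15-21: the instance itself
      ami           = "ami-xxxxxxxx"       # assumed AMI ID
      instance_type = "t2.micro"           # assumed instance type
      key_name      = "${aws_key_pair.deployer.key_name}"
    }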

 

In the next post we will cover how to structure the Terraform configuration better, and we will try combining it with Chef, one of the most popular configuration management tools in the DevOps community (in my opinion).

 

Jakarta, February 24th 2017

 

Aah Ahmad Kusumah

Categories: Devops, Tutorial