
Author Archive

Load Balancing With NGinX

May 7, 2018

Why do we need a load balancer? Can't we just build one very stable server with a zero failure rate? And when do we need to implement a load balancer?

Load balancing serves at least two main purposes:

  1. Increasing fault tolerance
  2. Distributing load

Increasing fault tolerance means ensuring and improving the uptime and reliability of the current application or system. This is critical for services that sell high availability, where downtime exceeding the tolerance limit causes big trouble and endangers business continuity. For example, when services like Facebook or WhatsApp go down, users who rely on them as their business backbone lose trust; if this happens frequently, they will abandon those platforms and, in the worst case, move to competitor services.

The second purpose of load balancing is distributing load. Load distribution makes scaling out (horizontal scaling) much easier than increasing the capacity and specification of a single server, which is a big advantage under high levels of concurrent transactions. Using a load balancer, we can add or remove servers easily without downtime, which ultimately drives cost savings.

In this post we will configure NGinX as a load balancer using 3 major techniques:

  1. Round-Robin
  2. Least Connections
  3. Hash Load Balancer

 

ROUND ROBIN

This could be the most common and simplest mechanism in load balancing: the first request is processed by the first available server in the defined stack, the second request is handled by the second server in the stack, and so on, on a rotational basis. It can be seen as a loop pattern, in which request handling goes back to the first server once the last server has processed a request.

The round-robin load balancer has one advantage: no further configuration is needed on the server side. But it also has a disadvantage: it ignores the current server load, because there is no check mechanism to ensure the current server can handle the request.

For example, we have two identical Rails apps which run on ports 3000 and 4000 on a local server, each with its own id. Imagine these are only small apps which just echo out the app id. The nginx config will be like this:
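A minimal sketch, assuming the upstream group is named rails_app (both blocks live in the http context):

upstream rails_app {
    # first server takes roughly 3x the requests of the second
    server 127.0.0.1:3000 weight=3;
    server 127.0.0.1:4000;
}

server {
    listen 80;

    location / {
        # send all traffic to the upstream group
        proxy_pass http://rails_app;
    }
}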

Inside the upstream block directive, under the http directive, we define two servers and the name of the server group. Along with the first server definition (127.0.0.1:3000), we also define a weight to indicate the server's capacity. It means the first server will serve 3 times as many requests as the other server; this is useful if we have two servers with different capacities. In the server directive, we use the proxy_pass directive to point all traffic to the upstream group.

NGinX uses the round-robin algorithm by default because we didn't specify any algorithm. This configuration makes all requests be served sequentially by the upstream servers: the first request is handled by the first server, the second request by the second server, and so on.

LEAST CONNECTION

In the Least Connections algorithm, requests are distributed to the server with the fewest active connections. This mechanism differs from round-robin, which does not consider server load or response time when distributing requests. The Least Connections algorithm assumes the number of active connections is proportional to server load.

This is a sample NGinX config using the Least Connections algorithm:
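A minimal sketch; only the upstream block changes from the round-robin example:

upstream rails_app {
    # pick the server with the fewest active connections
    least_conn;
    server 127.0.0.1:3000;
    server 127.0.0.1:4000;
}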

As we can see from the config above, the main difference is that we specify the least_conn algorithm in the upstream directive.

Now, when a request comes in, NGinX determines which server has the fewest active connections and assigns the request to that server.

HASH LOAD BALANCING

When a load balancer is configured to use the hash method, it computes a hash value, commonly from the client's IP address, then sends the request to the matching server, ensuring that connections within existing user sessions are consistently routed to the same back-end server. This means every subsequent request with the same hash will always be routed to the same upstream server.

The config itself is the same as for the round-robin and Least Connections algorithms. The difference is that we define hash in the upstream block and use the client's remote address as the key to build the hash.
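A minimal sketch, using the client address as the hash key:

upstream rails_app {
    # the same client address always maps to the same server
    hash $remote_addr;
    server 127.0.0.1:3000;
    server 127.0.0.1:4000;
}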

 

In conclusion, NGinX supports these 3 algorithms when set up as a load balancer, from the simple round-robin algorithm to the more complex hash algorithm.

For further reading, you can refer to the NGinX documentation here.

 

Jakarta, May 7th 2018

 

Aah Ahmad Kusumah


Fixing Encoding Issue on WordPress

January 4, 2018

Last week, I was struggling to revive an old blog, still running an old WordPress engine, after a machine failure, and to migrate it to nginx and php-fpm instead of the apache2 web server.

The migration steps will be explained later in a separate post. This post focuses on one major problem I faced while migrating the blog: an encoding issue. For context, this blog uses Arabic as its primary language and all posts are published in Arabic. I noticed the issue when I saw all the menus and posts showing bizarre characters instead of Arabic.

It wasn't Arabic, and not even regular text. I read a lot of posts on the WordPress forum to solve this issue, but had no luck. Some people suggested commenting out the lines in wp-config.php used for column encoding, DB_CHARSET and DB_COLLATE, but again no luck: the blog still showed unreadable characters.

Then I realised the problem was not an encoding issue while displaying the blog content; rather, all posts and settings were stored in the DB using these characters instead of the normal ones.

The first thing that crossed my mind was: how can we restore these weird characters to the original Arabic? Can we use the CONVERT and CAST functions in MySQL to revive the content in Arabic?

I read the nice MySQL documentation, then tried to find the perfect encoding. The first attempt was converting the content to latin1 and then to utf-8, but that failed. After several attempts, I finally found the correct combination using this query:
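A sketch of the general cast-and-convert pattern (the exact character-set combination came from trial and error, and the table and column names here are only illustrative):

SELECT CONVERT(CAST(CONVERT(post_title USING latin1) AS BINARY) USING utf8)
FROM wp_posts;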

After finding the correct cast-and-convert combination, the hardest part was updating all the records. What I did was write a small query for each table and each related column to restore every record to the correct Arabic content.
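One such per-column update might look like this (again, table and column names are illustrative):

UPDATE wp_posts
SET post_title = CONVERT(CAST(CONVERT(post_title USING latin1) AS BINARY) USING utf8);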

And this is the result:

 

Jakarta, 1st January 2018

 

A. Ahmad Kusumah

Categories: General

Spin Up Terraform and Chef – Part 2

April 23, 2017

This is the second part of this post. In this part we will cover Chef: how to write recipes, and how to configure and test our recipes on a target host managed by Vagrant.

What is Chef? Chef is a configuration management and automation platform that achieves speed, scalability and consistency by turning infrastructure into flexible, human-readable, versionable and testable code.
Users only need to write recipes that describe to Chef how servers and applications should be managed and configured. Chef offers:
1. Consistency

It means we can write recipes and run them multiple times on different machines, or the same machine, with the same result.

2. Efficiency

Chef can configure thousands of servers and keeps all configuration in one place; no more code scattered across each server's configuration.

3. Scalability

Chef has the capability to scale up infrastructure with well-managed code using roles, nodes and environments.

4. Reusing

We can re-use recipes and cookbooks with ease, and they will produce the same result.

Next, we will learn how to write a simple configuration using Chef from scratch.
We need the following pre-requisites installed before getting started with Chef:
1. Vagrant -> for managing virtual machines
2. VirtualBox -> virtualization software from Oracle
3. Ruby -> for writing recipes and resources
4. Git (optional) -> if we want to store the code in a git repository

Let's start:
*Preparation*

1. Create a folder for storing our recipes; let's name it sheeps-nolegs

mkdir sheeps-nolegs && cd sheeps-nolegs

2. Create a gemset; let's name it chef

rvm gemset create chef
rvm gemset use chef

3. Create a Gemfile inside it, consisting of the following lines:

source 'https://rubygems.org'

gem 'berkshelf'
gem 'knife-solo'

4. Run bundle install

bundle install

knife-solo is a command line tool that provides an interface between a local chef repo and a chef server.
knife-solo has 5 core commands:
1. knife solo init -> create a structured directory for chef
2. knife solo prepare -> install chef on a given host
3. knife solo cook -> upload the kitchen to a given host and run chef-solo on it
4. knife solo bootstrap -> a combination of prepare and cook
5. knife solo clean -> remove the uploaded kitchen from a given host

Berkshelf will manage cookbooks and their dependencies.

*Working With Chef*

1. Let's create the chef structured folder using knife solo

knife solo init .

this will produce the following structure:
a. .chef -> hidden folder containing knife.rb and pem files
b. Berksfile -> contains the sources for cookbooks to download
c. cookbooks -> folder to store vendored cookbooks
d. data_bags -> folder to store chef data bags
e. environments -> folder for chef environments
f. nodes -> folder containing chef nodes
g. roles -> folder containing chef roles
h. site-cookbooks -> folder to store custom cookbooks

2. In the .chef folder we have the file knife.rb; it contains the default chef-repo-specific configuration:

a. cookbook_path -> the sub-directory for cookbooks
b. node_path -> the sub-directory for nodes
c. role_path -> the sub-directory for roles
d. environment_path -> the sub-directory for environments
e. data_bag_path -> the sub-directory for data bags
f. knife[:berkshelf_path] -> the directory for vendoring cookbooks from the Berksfile

3. Let's try to install a cookbook onto a node; in this case we'll install apache2. Edit the Berksfile and add the following lines:

source "http://api.berkshelf.com"
cookbook 'apache2'

4. Execute the berks command to install the apache2 cookbook

berks install

this will install apache2 with its dependencies
5. then execute berks vendor cookbooks to move the cookbooks from ~/.berkshelf/cookbooks into the cookbooks folder


6. Let's define a node.

A node represents any physical machine, virtual machine or cloud instance. The node file is named after the machine's domain, such as sheeps.com, and consists of a valid JSON configuration for that specific machine.
– let's create a node for sheeps.com
– vi sheeps.com.json
{
        "name": "sheeps.com",
        "run_list": [
              "recipe[apache2]"
        ]
}
– run_list is the main configuration in this file; it contains an array of recipes and roles.
– in the sample above, it will execute the apache2 recipe from the apache2 cookbook

*Vagrant*

In this post, for testing cookbooks, we will use Vagrant. Vagrant is free and open-source software for creating and configuring virtual development environments.

1. Download Vagrant from this link, and follow the installation instructions
2. Vagrant uses base images instead of creating virtual machines from scratch; these bases are known as boxes in Vagrant. Let's use ubuntu 12.04 (precise64) as the base image, or we can find other boxes in the Vagrant cloud or https://atlas.hashicorp.com/search.

a. create a Vagrantfile inside our chef-solo directory
vagrant box add precise64 http://files.vagrantup.com/precise64.box --force
this will download the precise64 box from the repository; the --force option replaces any existing precise64 box.
vagrant init precise64
this command will create a Vagrantfile, which typically should look like this:
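A minimal sketch of the generated file, with the boilerplate comments stripped:

Vagrant.configure("2") do |config|
  # use the precise64 box added above
  config.vm.box = "precise64"
end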

b. to check that Vagrant is running well, invoke the following command:
vagrant up
then we can do vagrant ssh to test the ssh connection into the running Vagrant vm.
*note: if prompted for a password, use 'vagrant', the default password for the vagrant user


c. in some cases, this vm won't have any chef client; we can install it on the target vm using knife solo prepare, as discussed above.
knife solo prepare vagrant@localhost -i ~/.vagrant.d/insecure_private_key -p 2222 -N sheeps.com
option -i specifies the ssh key for the machine
option -p specifies the ssh port on the target vm
option -N specifies which node will be used

d. now we can run our kitchen on the node using the knife solo cook command
knife solo cook vagrant@localhost -i ~/.vagrant.d/insecure_private_key -p 2222 -N sheeps.com
e. by default apache2 will run on port 80; we can forward it to another port, let's say 8080. Add the following line to the Vagrantfile:
config.vm.network :forwarded_port, guest: 80, host: 8080
f. then invoke vagrant reload to reload our target vm
g. and try to access http://localhost:8080; if it returns a 404 (Apache's default response when no site content is configured yet), apache2 has been successfully installed

End of Part 2.
In the next post we will dig deeper into how to define roles, environments and data_bags, and how to create custom recipes.

 

Jakarta, 23 April 2017

 

Aah Ahmad Kusumah

Categories: Devops, Tutorial

Spin Up Terraform and Chef – Part 1

February 24, 2017

What is Terraform? Cited from their documentation:

Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. Terraform can manage existing and popular service providers as well as custom in-house solutions.

For detailed features and everything else Terraform offers, please refer to their concise and complete documentation.

Terraform has four main components:

  1. Providers : terraform can talk to back-end service providers, such as AWS, Digital Ocean, etc.
  2. Resources : resources are the building blocks of a terraform configuration
  3. Variables : variables store the values used within a terraform configuration; this makes the configuration friendlier and more flexible
  4. Configuration : files with the *.tf extension that store the terraform configuration

This post won't explain the features in detail; it will only consist of two parts:

  1. Installation and basic configuration
  2. A test and example of provisioning an environment on AWS

Terraform Installation

Terraform installation is quite straightforward.

  1. Download the installation archive here, choosing the appropriate version for your system; this post will demonstrate on a Mac OS system, and Linux should be similar.
  2. Extract the downloaded zip, and copy the terraform binary into the folder that will be set as the terraform path, let's say /Users/kusumah/Documents/Development/terraform
  3. Set the path; mine uses .bash_profile (see the sketch after this list)
  4. Invoke source ~/.bash_profile to update the environment immediately
  5. Check the installation by invoking the terraform command; if successful, it prints the list of available commands
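The .bash_profile addition is just a PATH export; a sketch, assuming the folder from step 2:

export PATH=$PATH:/Users/kusumah/Documents/Development/terraform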

Next, we will try to spin up an AWS server using terraform.

Terraform AWS

After finishing the terraform installation, let's try to spin up a simple AWS server using terraform.

  1. Create a workspace directory; in this case I'll create it at /Users/kusumah/Documents/TUTORIAL/TERRAFORM/sample
  2. Create a terraform configuration; let's name it spinupserver.tf. Below is my simple configuration (a sketch appears after this list). In lines 2-6 we define the provider terraform will talk to; in this case, we'll use AWS. access_key and secret_key can be obtained from AWS, and region defines which AWS region will be used. In lines 9-12 we define the AWS key pair used to access the created AWS instance; it consists of key_name and public_key. For generating a public key on a linux machine, refer to this link. Lines 15-21 define the aws instance.
  3. Save that file in our workspace directory
  4. Now invoke terraform plan; when it succeeds, it prints the execution plan for the resources to be created
  5. Now we can invoke terraform apply to apply the plan
  6. and voila, our new instance is created successfully in AWS
  7. To destroy the current plan and terminate the instance, just invoke terraform destroy; it will automatically terminate the created instances defined in the terraform configuration
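A minimal sketch of spinupserver.tf as described in step 2 (access keys, AMI id and key material are placeholders, and the line numbers in the text refer to the original file):

provider "aws" {
  access_key = "YOUR_ACCESS_KEY"
  secret_key = "YOUR_SECRET_KEY"
  region     = "us-east-1"
}

resource "aws_key_pair" "deployer" {
  key_name   = "deployer-key"
  public_key = "ssh-rsa AAAA... user@example.com"
}

resource "aws_instance" "sample" {
  # placeholder AMI; pick one that exists in the chosen region
  ami           = "ami-xxxxxxxx"
  instance_type = "t2.micro"
  key_name      = "${aws_key_pair.deployer.key_name}"
}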

 

In the next post, we will cover how to make the terraform configuration better structured, and we will try to combine it with Chef, one of the most popular configuration management tools in the DevOps community (in my opinion).

 

Jakarta, February 24th 2017

 

Aah Ahmad Kusumah

Categories: Devops, Tutorial

Grant Table User to Another User [Oracle]

August 22, 2016

Below is a snippet for granting privileges on one user's tables to another user; run it in SQL*Plus with SERVEROUTPUT enabled.

declare
  -- cursor over all tables owned by the current user
  cursor t_name is select table_name from user_tables;
  command varchar2(500);
begin
  for c in t_name loop
    -- build and run a GRANT statement for each table
    command := 'GRANT SELECT, INSERT, UPDATE, DELETE ON ' || c.table_name || ' TO <other_user>';
    dbms_output.put_line(command);
    execute immediate command;
  end loop;
end;
/


Jakarta, 24 August 2016

 

A. Ahmad Kusumah

Install ORACLE on AIX 6.1

July 24, 2016

This Oracle 11g installation procedure for AIX 6.1 on an IBM P-series machine is summarized from best practice at one of our clients. This procedure will be followed by a WebSphere Application Server installation and configuration procedure for the same environment and machine.

The steps of the Oracle 11g installation on AIX are briefly described as follows:

1. Check the software pre-requisites on the AIX system:

  • bos.adt.base
  • bos.adt.lib
  • bos.adt.libm
  • bos.perf.libperfstat
  • bos.perf.perfstat
  • bos.perf.proctools
  • xlC.aix50.rte 8.0.0.8 or later
  • xlC.rte 8.0.0 or later

2. Run the command below to verify the pre-requisites:

  • lslpp -l bos.adt.base bos.adt.lib bos.adt.libm bos.perf.perfstat bos.perf.libperfstat bos.perf.proctools

3. Change the value of the maximum number of processes allowed:

  • Run "smit chgsys"
  • Set "Maximum number of PROCESSES allowed per user" > 2048
  • Verify that the value of "ARG/ENV list size in 4K byte blocks" >= 128

4. Create the user and groups for the oracle installation, using the following commands:

  • mkgroup oinstall
  • mkgroup dba
  • mkgroup oper
  • useradd -g oinstall -G dba,oper -m oracle
  • passwd oracle # set a password for the oracle user

5. Create the oracle home directory and set its ownership and privileges

  • mkdir -p /database/oracle/app
  • chown -R oracle:oinstall /database/oracle/app
  • chmod -R 755 /database/oracle/app

6. Change display setting on AIX

  • vi /home/oracle/.profile
  • add the line: DISPLAY=:1.0; export DISPLAY

7. Set ORACLE_HOME

  • vi /home/oracle/.profile
  • add the following lines :
  • ORACLE_BASE=/opt.app/oracle
  • ORACLE_SID=orcl
  • export ORACLE_BASE
  • export ORACLE_SID
  • ORACLE_HOME=$ORACLE_BASE/product/11.1.0/db1
  • PATH=$ORACLE_HOME/bin:$PATH
  • export ORACLE_HOME
  • export PATH

8. Switch user to oracle and run the installer as usual with the option -ignorePrereq
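A sketch of the commands, assuming the installer archive has already been unpacked in the oracle user's home directory:

su - oracle
./runInstaller -ignorePrereq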


and follow the installation procedure until it finishes …

 

Bogor, Sunday 24 July 2016

 

A. Ahmad Kusumah

Pointing Domain into AWS EC2

July 22, 2016

Pointing a domain to AWS EC2 is quite simple; what you need to do is:

  1. Create an EC2 instance, then write down the IP address (e.g. 10.10.2.10)
  2. Open Route 53 in the AWS console (https://console.aws.amazon.com/route53/)
  3. Once you are in Route 53, select DNS Management, or choose Hosted Zones in the sidebar menu, then select Create Hosted Zone
  4. Type your domain name, set the type to Public Hosted Zone, then press the Create button
  5. Then choose Create Record Set, create an A record, and point the address to your EC2 public IP from step 1 (see the record sketch after this list)
  6. Then write down the NS entries from the created Hosted Zone, and change your domain's NS records to those from the Hosted Zone. The change may reflect immediately, but with some providers you may need to wait up to 24 hours.
  7. And voila, you have successfully pointed and associated your domain name to your AWS EC2 instance.
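For illustration, the resulting A record in zone-file notation would look like this (the domain is a placeholder; the IP is the example from step 1):

example.com.    300    IN    A    10.10.2.10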

 

Jakarta, Friday 22 July 2016

 

A. Ahmad Kusumah