Cluster Liferay? Vagrant up @ Liferay Symposium Italy 2016

Hi everyone,

after several years of working on DevOps methodologies, with a particular focus on configuration management, I keep assuming that certain techniques can no longer surprise or interest an audience of highly specialized people.

Instead, the feedback on what I showed today at the Liferay Symposium 2016 held in Milan was very positive (thanks also to the spectacular participation of Alessio Biancalana aka dottorblaster http://dottorblaster.it/ and Claudio Umana).

What did we talk about?

The topic was very simple: how to give developers the ability to spin up a Liferay cluster in a few minutes.

Demo effect… I needed an exit strategy in case the demo broke. Here it is:

https://www.youtube.com/watch?v=VofvrMGaBqc

Which DevOps tools?

I opted for a by-now well-established pairing: Vagrant + Chef.

The architecture?

By now everybody at the company knows I am an HAProxy fan, so I placed it in front of everything.

[Screenshot: cluster status]

Behind it, the two Apache Tomcat backends running the enterprise version of Liferay Portal 6.2 EE SP14, communicating via multicast. The relevant portal-ext.properties (rendered from a Chef ERB template):

default.admin.password=liferay
default.admin.screen.name=Admin
default.admin.email.address.prefix=admin
default.admin.first.name=Test
default.admin.last.name=Test
setup.wizard.enabled=false
users.reminder.queries.enabled=false
terms.of.use.required=false

web.server.display.node=true
org.quartz.jobStore.isClustered=true

cluster.link.enabled=true
ehcache.cluster.link.replication.enabled=true
lucene.replicate.write=true
net.sf.ehcache.configurationResourceName=/ehcache/hibernate-clustered.xml
ehcache.multi.vm.config.location=/ehcache/liferay-multi-vm-clustered.xml

dl.store.impl=com.liferay.portlet.documentlibrary.store.AdvancedFileSystemStore
dl.store.file.system.root.dir=<%=node['sourcesense_liferay']['data_nfs_mount']%>/document_library

cluster.link.autodetect.address=192.168.50.4:3306

jdbc.default.driverClassName=com.mysql.jdbc.Driver
jdbc.default.url=jdbc:mysql://<%=node['sourcesense_mysql']['db_host']%>/<%=node['sourcesense_mysql']['database']%>?useUnicode=true&characterEncoding=UTF-8&useFastDateParsing=false
jdbc.default.username=<%=node['sourcesense_mysql']['dbuser']%>

Below that, a machine serving the shared data disk over NFS, plus the MySQL server.
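The front layer can be sketched with a minimal haproxy.cfg fragment like the one below. This is only an illustrative sketch: the backend names, IPs, ports and cookie-based stickiness are my assumptions, not the exact demo configuration.

```
# Illustrative haproxy.cfg fragment -- server names and IPs are assumptions
frontend liferay_front
    bind *:80
    default_backend liferay_nodes

backend liferay_nodes
    balance roundrobin
    # sticky sessions keep a browser on the same Tomcat node
    cookie SRV insert indirect nocache
    option httpchk GET /
    server liferaynode01 192.168.50.2:8080 check cookie node01
    server liferaynode02 192.168.50.3:8080 check cookie node02
```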

I versioned everything at https://github.com/lucky-sideburn/demo-liferay, declaring nodes like this in the Vagrantfile:

liferaynode01.vm.provision :chef_solo do |chef|
  chef.roles_path = './chef/roles'
  chef.data_bags_path = './chef/data_bags'
  chef.run_list = [
    'role[java]',
    'recipe[sourcesense_liferay]'
  ]
end

You can find the code used for the demo here:

https://github.com/lucky-sideburn/demo-liferay

It would be nice to split the various cookbooks out and share them with the Chef community 🙂 So take them and chef-ify away!

ciao!


A simple recipe for MongoDB clusters

Hi everybody!

my task for today was to configure MongoDB with redundancy and high availability…

I decided to write my own Chef cookbook to configure a replica set.

Below are the most important parts:

Install MongoDB packages

[Screenshot: Chef resources installing the MongoDB packages]

Use a template for the main configuration file

[Screenshot: template resource for the MongoDB configuration file]

Enable the MongoDB Linux service at boot

[Screenshot: service resource enabling MongoDB at boot]
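Since the recipe code itself lives in the screenshots, here is a rough sketch of what those three steps can look like in Chef DSL. The package names, template source and service name are my assumptions based on a standard mongodb-org setup, not the cookbook's actual code:

```ruby
# Sketch of the three steps above -- package/service names are assumptions
%w(mongodb-org mongodb-org-server mongodb-org-shell).each do |pkg|
  package pkg
end

# render the main configuration file from a cookbook template
template '/etc/mongod.conf' do
  source 'mongod.conf.erb'
  owner 'root'
  group 'root'
  mode '0644'
  notifies :restart, 'service[mongod]'
end

# enable the service at boot and start it
service 'mongod' do
  action [:enable, :start]
end
```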

What about the replica’s configuration?

I suggest using a custom LWRP that executes rs.initiate() to declare the replica set, rs.add() to add primary and secondary servers, and rs.addArb() to add arbiter servers. You can loop over hashes like this:

"foobar" => { "secondaries" => ["mynode01:27017", "mynode02:27017"] }
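The provider essentially turns that hash into mongo-shell calls. A rough shell equivalent of what such an LWRP ends up executing (the arbiter host name is made up for illustration; applying the script for real is left as a comment):

```shell
# Turn the attribute hash into the mongo-shell calls the provider runs (sketch).
REPLSET="foobar"
SECONDARIES="mynode01:27017 mynode02:27017"
ARBITERS="myarb01:27017"   # illustrative arbiter, not from the post

{
  echo "// replica set: $REPLSET"
  echo "rs.initiate()"                  # declare the replica set
  for host in $SECONDARIES; do
    echo "rs.add(\"$host\")"            # add data-bearing members
  done
  for host in $ARBITERS; do
    echo "rs.addArb(\"$host\")"         # add arbiters (vote only, no data)
  done
} > /tmp/replica_init.js

cat /tmp/replica_init.js
# Apply for real with: mongo /tmp/replica_init.js
```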

and a Chef provider like this:

[Screenshot: Chef provider running the replica set commands]

I need to test my cluster, so let's prepare a Vagrantfile like this in order to manage all the virtual machines concurrently:

[Screenshot: Vagrantfile defining the cluster VMs]

Finally, HAProxy as reverse proxy and load balancer! Use autodiscovery (https://github.com/hw-cookbooks/haproxy) to find the backends automatically, and health checks to point clients at the right node after a new primary is promoted.

[Screenshot: HAProxy configuration]

 

Ad maiora!

Install Chef Server on Suse Linux Enterprise 11

Hi Folks!

Today I dealt with a problem… and I found a solution because Chef is a great tool!

At the moment there is no RPM for SUSE Linux available from the official website, but that does not matter 🙂

Problem: install Chef Server, ChefDK and Chef-manage on a SUSE Linux Enterprise 11 virtual machine without being able to install the RHEL RPM packages directly.

[Screenshot: SUSE Linux Enterprise 11 virtual machine]

This is what you can do:

  1. Download the following packages:
    • chef-server-core-12.8.0-1.el6.x86_64.rpm
    • chefdk-0.16.28-1.el6.x86_64.rpm
    • chef-manage-2.4.1-1.el6.x86_64.rpm
  2. Extract the contents of each RPM with:
    • rpm2cpio chef-manage-2.4.1-1.el6.x86_64.rpm | cpio -idmv
  3. Move the extracted content to the correct folders: /opt/{chef,chef-manage,opscode}
  4. Set PATH="/opt/opscode/bin:/opt/chefdk/bin:/opt/chef-manage/bin:$PATH" in your login profile script
  5. chef-server-ctl reconfigure
  6. chef-manage-ctl reconfigure
  7. chef-server-ctl reconfigure again
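The extraction steps above can be scripted roughly like this. It is a sketch: it assumes the three RPMs have already been downloaded into the current directory, and skips quietly if they are not there.

```shell
# Unpack the Chef RPMs without rpm/zypper (sketch of steps 2-4 above)
set -e
for pkg in chef-server-core-*.rpm chefdk-*.rpm chef-manage-*.rpm; do
  [ -e "$pkg" ] || continue          # skip patterns that matched nothing
  rpm2cpio "$pkg" | cpio -idm        # payload lands under ./opt
done

# step 3: move the payload into place (only if something was extracted)
[ -d opt ] && cp -a opt/. /opt/

# step 4: make the ctl commands and Chef tools reachable
export PATH="/opt/opscode/bin:/opt/chefdk/bin:/opt/chef-manage/bin:$PATH"
echo "$PATH"
```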

At the end, all services are up and running

[Screenshot: all Chef Server services up and running]

and my workstation too 🙂

Chef Automate – Installation guide

Hi guys!

let's take a look at Chef Automate. In this post we will see how to install it quickly.

[Screenshot: Chef Automate]

I will install it through Vagrant, but you can also use my cookbook on a Chef Server.


Requirements:

  1. a Chef Server. Change default['chef_automate']['chef_server']['url'] to the correct IP
  2. a user key (client.pem) for a member of your Chef Server organization. Change default['chef_automate']['key']['base'] and default['chef_automate']['key']['name'] to your values
  3. a VirtualBox private network 192.168.56.0 (or set port forwarding in the Vagrantfile to access the web server through http://127.0.0.1)
  4. a delivery.license file. Put it in the cookbook directory; it is visible under /vagrant inside the guest VM.

Start the provisioning:

  1. git clone https://github.com/lucky-sideburn/chef_automate.git
  2. vagrant up
  3. Browse to https://automate-box01/e/umbrella_corporation/ (or use your preferred internal IP, or port forwarding to 127.0.0.1)
  4. Select your enterprise

[Screenshot: enterprise selection page]

5. Enjoy!

[Screenshot: Chef Automate dashboard]


 

Thanks!

Eugenio Marzo – Devops Engineer @Sourcesense



Autoscaling with EC2 and Chef

Dear all,

It has been a long time since my last post, and here I am with a new one, just to keep track of my current case study…

I would like to put in place an auto-scaling mechanism for my lab platform.

Currently I have one HAProxy load balancer with 2 backends. I will run stress tests on the front end with JMeter and automatically create a virtual machine joined to my Chef infrastructure in order to increase resources.

In this post I will describe just how to set up an initial configuration of autoscaling group + Chef (today is Friday… on Monday I will do the rest 😉)

Let’s start  with the needed components:

  1. a Chef Server
  2. one HAProxy load balancer
  3. two Tomcat backends

Now let's look at the script for the unattended bootstrap. This script registers a new node with the Chef Server. I tried it locally on a simple virtual machine: a CentOS 7 running in VirtualBox.

[ ! -e /etc/chef ] && mkdir /etc/chef

cat <<EOF > /etc/chef/validation.pem
-----BEGIN RSA PRIVATE KEY-----
your super secret private key :)
-----END RSA PRIVATE KEY-----
EOF

cat <<EOF > /etc/chef/client.rb
log_location STDOUT
chef_server_url "https://mychefserver.goofy.goober/organizations/myorg"
ssl_verify_mode :verify_none
validation_client_name "myorg-validator"
EOF

cat <<EOF > /etc/chef/first-boot.json
{
 "run_list": ["role[tomcat_backend]"]
}
EOF

curl -L https://www.opscode.com/chef/install.sh | \
bash -s -- -v 12.9.41 &> /tmp/get_chef.log
chef-client -E amazon_demo -j /etc/chef/first-boot.json  \
&> /tmp/chef.log 


If everything went correctly you will see the new node in your Chef Server dashboard. Check the logs on the new node in case of problems:

/tmp/chef.log
/tmp/get_chef.log

Now let’s create the autoscaling-group in Amazon EC2

[Screenshot: creating the autoscaling group]

Then select your preferred instance… I am using RHEL 7.2

[Screenshot: instance type selection]

Insert the bootstrap script (the one we just created) as the "User data file":

[Screenshot: user data configuration]

I have no instances running in my cloud, so the following configuration will spawn a virtual machine, since the minimum required is 1.

[Screenshot: autoscaling group size configuration]
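If you prefer scripting over the console, the same setup maps onto two AWS CLI calls, roughly like this. The launch-configuration and group names, AMI id, availability zone and the bootstrap.sh filename are placeholders of mine; the block only prints the commands, since running them requires real credentials:

```shell
# The console steps above, expressed as AWS CLI calls (sketch -- names,
# AMI id and availability zone are placeholders, not values from the post).
ASG_CMDS=$(cat <<'EOF'
aws autoscaling create-launch-configuration \
  --launch-configuration-name chef-backend-lc \
  --image-id ami-xxxxxxxx \
  --instance-type t2.micro \
  --user-data file://bootstrap.sh

aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name chef-backend-asg \
  --launch-configuration-name chef-backend-lc \
  --min-size 1 --max-size 2 --desired-capacity 1 \
  --availability-zones eu-west-1a
EOF
)
echo "$ASG_CMDS"
```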

After a minute I got an email saying:

Description: Launching a new EC2 instance: $my_id_istance
Cause: At 2016-05-06T15:10:17Z an instance was started in response to a 
difference between desired and actual

Finally I have a new configured node in my Chef Server: autoscaling_node01.

[Screenshot: the new node autoscaling_node01 in the Chef Server dashboard]

That’s all folks!

Bye for now…

Eugenio Marzo
DevOps Engineer at SourceSense