Sunday, November 27, 2016

Oracle Linux - Consul failed to sync remote state: No cluster leader

Whenever you install and run Consul from HashiCorp on Oracle Linux you might run into some strange errors. Even though your configuration JSON file passes the configuration validation, the log file contains a long repetitive list of the same errors complaining about "failed to sync remote state: No cluster leader" and "coordinate update error: No cluster leader".

Consul is a tool for service discovery and configuration. It provides high-level features such as service discovery, health checking and key/value storage. It makes use of a group of strongly consistent servers to manage the datacenter. Consul is developed by HashiCorp and is available from its own website.

It might be that you see the below output when you start Consul:

    2016/11/25 21:03:50 [INFO] raft: Initial configuration (index=0): []
    2016/11/25 21:03:50 [INFO] raft: Node at [Follower] entering Follower state (Leader: "")
    2016/11/25 21:03:50 [INFO] serf: EventMemberJoin: consul_1
    2016/11/25 21:03:50 [INFO] serf: EventMemberJoin: consul_1.private_dc
    2016/11/25 21:03:50 [INFO] consul: Adding LAN server consul_1 (Addr: tcp/ (DC: private_dc)
    2016/11/25 21:03:50 [INFO] consul: Adding WAN server consul_1.private_dc (Addr: tcp/ (DC: private_dc)
    2016/11/25 21:03:55 [WARN] raft: no known peers, aborting election
    2016/11/25 21:03:57 [ERR] agent: failed to sync remote state: No cluster leader
    2016/11/25 21:04:14 [ERR] agent: coordinate update error: No cluster leader
    2016/11/25 21:04:30 [ERR] agent: failed to sync remote state: No cluster leader
    2016/11/25 21:04:50 [ERR] agent: coordinate update error: No cluster leader
    2016/11/25 21:05:01 [ERR] agent: failed to sync remote state: No cluster leader
    2016/11/25 21:05:26 [ERR] agent: coordinate update error: No cluster leader
    2016/11/25 21:05:34 [ERR] agent: failed to sync remote state: No cluster leader
    2016/11/25 21:06:02 [ERR] agent: coordinate update error: No cluster leader
    2016/11/25 21:06:10 [ERR] agent: failed to sync remote state: No cluster leader
    2016/11/25 21:06:35 [ERR] agent: coordinate update error: No cluster leader

The main reason for the above is that you are trying to start Consul in an environment where there is no cluster available, or where this is the first node of the cluster. If you start it as the first, or only, node of the cluster you have to ensure that you include -bootstrap-expect 1 as a command line option when starting (in case you will only have one node).

You can also include "bootstrap_expect": 1 in the JSON configuration file if you use a configuration file to start Consul.
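For example, a minimal single-node server configuration file could look like the below (the data_dir location is just an example; adjust it to your own deployment):

```json
{
 "server": true,
 "bootstrap_expect": 1,
 "data_dir": "/tmp/consul"
}
```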

As an example, the below start of Consul will prevent the above errors:

consul agent -server -bootstrap-expect 1 -data-dir /tmp/consul

Friday, November 25, 2016

Oracle Linux - build Elasticsearch configuration

With the latest version of Elasticsearch the directives used to ensure your Elasticsearch daemon is listening to the correct interfaces on your Linux machine have changed. By default Elasticsearch will listen on your local interface only which is a bit useless in most cases.

When deploying Elasticsearch manually it is not a problem to configure this by hand; however, we are moving more and more to a world where deployments are done fully automatically. In case you use fully automated deployment and depend on bash scripting for some of the tasks, the below scripts will be handy to use.

In my case I used the below scripts to automatically configure Elasticsearch on Oracle Linux 6 instances to listen on all available interfaces, ensuring that Elasticsearch is directly usable by external servers and users.

To ensure your Elasticsearch daemon is listening on all interfaces you will have to ensure the below line is available; at least in my case, as I have two external interfaces and one local loopback interface in my instance: _eth0_,_eth1_,_local_

When you are sure your machine will always have two external network interfaces and one local loopback interface you want Elasticsearch to listen on, you could hardcode this. However, if you want to build a more generic and stable solution you should read the interface names and construct this configuration line.

The ifconfig command will give you the interfaces in a human-readable format which is not very usable in a programmatic manner. However, ifconfig does provide the required output, which means we can use it in combination with sed to get a list of the interface names only. The below example shows this:

[root@localhost tmp]# ifconfig -a |sed 's/[ \t].*//;/^\(lo\|\)$/d'
eth0
eth1
[root@localhost tmp]#

However, this is not yet the format we want, so we have to create a small script to get it closer to the format we need. The below code example can be used for this:


  for OUTPUT in $(ifconfig -a |sed 's/[ \t].*//;/^\(lo\|\)$/d')
  do
   echo "_"$OUTPUT"_"
  done

If we execute this we will have the following result:

[root@localhost tmp]# ./
_eth0_
_eth1_
[root@localhost tmp]#

As you can see it is looking more like what we want as input for the Elasticsearch configuration file; however, we are not fully done. First of all the _local_ part is missing and we still have a multi-line representation. The below code example shows the full script you can use to build the configuration line. We have added _local_ and we use awk to make sure it is one comma-separated line you can use.

  {
   for OUTPUT in $(ifconfig -a |sed 's/[ \t].*//;/^\(lo\|\)$/d')
   do
    echo "_"$OUTPUT"_"
   done
   echo "_local_"
  } | awk -v ORS=, '{ print $1 }' | sed 's/,$/\n/'

If we run the above code we will get the below result:

[root@localhost tmp]# ./
_eth0_,_eth1_,_local_
[root@localhost tmp]#

You can use this in a wider script to ensure the line is written to the /etc/elasticsearch/elasticsearch.yml file, which is used by Elasticsearch as the main configuration file. As stated, I used this script and tested it while deploying Elasticsearch on Oracle Linux 6. It is expected to work on other Linux distributions; however, this has not been tested.
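As a sketch of such a wider script, the line-building logic can be wrapped in a small function (the function name is my own, and the example call uses hardcoded interface names for illustration):

```shell
# Build the comma-separated interface line (e.g. _eth0_,_eth1_,_local_)
# from a list of interface names; _local_ is always appended at the end.
build_host_line() {
  {
    for IFACE in "$@"
    do
      echo "_${IFACE}_"
    done
    echo "_local_"
  } | awk -v ORS=, '{ print $1 }' | sed 's/,$/\n/'
}

# Example with hardcoded names; on a live system you would pass
# $(ifconfig -a | sed 's/[ \t].*//;/^\(lo\|\)$/d') instead.
build_host_line eth0 eth1
```

The resulting line can then be appended to /etc/elasticsearch/elasticsearch.yml, for example as part of the network.host directive (verify the directive name against the documentation of your Elasticsearch version).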

Monday, November 21, 2016

Using Oracle cloud to integrate Salesforce and Amazon hosted SAP

Oracle Integration Cloud Service (ICS) delivers “Hybrid” Integration. Oracle Integration Cloud Service is a simple and powerful integration platform in the cloud to maximize the value of your investments in SaaS and on-premises applications. It includes an intuitive web based integration designer for point and click integration between applications and a rich monitoring dashboard that provides real-time insight into the transactions, all running on Oracle Public Cloud. Oracle Integration Cloud Service will help accelerate integration projects and significantly shorten the time-to-market through its intuitive and simplified designer, an intelligent data mapper, and a library of adapters to connect to various applications.

Oracle Integration Cloud Service can also be leveraged during a transition from on-premises to cloud, or when building a multi-cloud strategy. As an example, Oracle provides a standardized connection between Salesforce and SAP.

An outline of how to achieve this integration is shown in the below video, which covers the options and the ease of developing an integration between Salesforce and SAP, ensuring the two solutions work as one integrated and hybrid solution.

As enterprises move more and more to a full cloud strategy, having a central integration point already positioned in the cloud is ideal. As an example, SAP can run on Amazon. During a test and migration path to the cloud you most likely want to ensure you can test your integration between Salesforce and SAP without the need to re-code and re-develop the integration.

By using Oracle Integration Cloud Service as your central integration solution, the move to a cloud strategy for your non-cloud-native applications becomes much easier. You can add a second integration during your test and migration phase and, when your migration to, for example, Amazon has been completed, you can discontinue the integration to your old on-premises SAP instances.

This will finally result in an all-cloud deployment where you have certain business functions running in Salesforce and your SAP systems running in Amazon, while you leverage Oracle Integration Cloud Service to bind all systems together and make it a true hybrid multi-cloud solution.

Monday, November 14, 2016

Oracle Cloud API - authenticate user cookie

Whenever interacting with the APIs of the Oracle Compute cloud service, the first thing that needs to be done is to authenticate yourself against the API. This is done by providing the required authentication details; in return you will receive a cookie. This cookie is used for the future API calls you will make until the cookie lifetime expires.

The Oracle documentation shows an example as shown below:

curl -i -X POST -H "Content-Type: application/oracle-compute-v3+json" -d "@requestbody.json"

A couple of things you have to keep in mind when looking at this example: it will execute the curl command against the REST API endpoint URL for the US0 cloud datacenter and it expects a file named requestbody.json to be available with the “payload” data.

In the example the payload file has the following content:

 {
  "password": "acme2passwrd123",
  "user": "/Compute-acme/"
 }

The thing to keep in mind when constructing your own payload JSON is that the “Compute-” prefix in “Compute-acme” must remain. Meaning, if your identity domain is “someiddomain” the value should look like “Compute-someiddomain” and not just “someiddomain”.
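As an illustration (the identity domain, user name and password below are all hypothetical), a payload for the identity domain “someiddomain” would look like:

```json
{
 "password": "somepassword",
 "user": "/Compute-someiddomain/some.user@example.com"
}
```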

When executing the curl command you will receive the response like shown below:

HTTP/1.1 204 No Content
Date: Tue, 12 Apr 2016 15:34:52 GMT
Server: nginx
Content-Type: text/plain; charset=UTF-8
X-Oracle-Compute-Call-Id: 16041248df2d44217683a6a67f76a517a59df3
Expires: Tue, 12 Apr 2016 15:34:52 GMT
Cache-Control: no-cache
Vary: Accept
Content-Length: 0
Set-Cookie: nimbula=eyJpZGVudGl0eSI6ICJ7XC...fSJ9; Path=/; Max-Age=1800
Content-Language: en

The part that you need, the actual cookie data which needs to be used in later API calls, is only a subpart of the response received. The part you need is:

nimbula=eyJpZGVudGl0eSI6ICJ7XC...fSJ9; Path=/; Max-Age=1800

The rest of the response is not directly needed.
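As a small sketch, the cookie data can be isolated from the response headers with sed (the header line below is taken from the example response; the helper name is my own):

```shell
# Strip the "Set-Cookie: " prefix from a response header line,
# leaving only the cookie data needed for later API calls.
extract_cookie() {
  echo "$1" | sed -n 's/^Set-Cookie: //p'
}

HEADER='Set-Cookie: nimbula=eyJpZGVudGl0eSI6ICJ7XC...fSJ9; Path=/; Max-Age=1800'
extract_cookie "$HEADER"
# prints: nimbula=eyJpZGVudGl0eSI6ICJ7XC...fSJ9; Path=/; Max-Age=1800
```

The resulting value can then be passed to subsequent curl calls, for example via a Cookie request header.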