Thursday, March 23, 2017

Oracle Cloud - Architecture Blueprint - microservices transport protocol encryption

The default way microservices communicate with each other is based upon the HTTP protocol. When one microservice needs to call another microservice it will initiate a service call based upon an HTTP request. The request can use any of the standard methods defined in the HTTP standard, such as GET, POST and PUT. In effect this is a good mechanism and enables you to use all of the standards defined within HTTP. The main issue with HTTP is that it is clear text and will not have encryption enabled by default.

The reality one has to deal with is that the number of microservice instances can be enormous, and the number of possible connections in a complex landscape equally so. This also means that each possible path, each network connection, can potentially be intercepted. Having no HTTPS (SSL/TLS) encryption implemented makes intercepting network traffic much easier.



It is a best practice to ensure all of your connections are encrypted by default; to do so you will need to make use of HTTPS instead of HTTP. Building your microservices deployment to only work with HTTPS and not with HTTP brings in a couple of additional challenges.

The challenge of scaling environments
In a microservices oriented deployment, containers or virtual machines that provide instances of a microservice will be provisioned and de-provisioned in a matter of seconds. The issue that comes with this in relation to using HTTPS instead of HTTP is that you want to ensure that all HTTPS connections between the systems are based upon valid certificates which are created and controlled by a central certificate authority.

Even though it is possible to have each service that is provisioned generate and sign its own certificate, this is not advisable. Using self-signed certificates is generally considered insecure. Most standard implementations for negotiating encryption between two parties do not see a self-signed certificate as a valid level of security. Even though you can force your code to accept a self-signed certificate and make it work, and thereby ensure encryption is negotiated and used on the protocol level, you will not be able to fully assure that the other party is not a malicious node owned by an intruder.

To ensure that all instances can verify that the other instance they call is indeed a trusted party, and to ensure that encryption is used in the manner it is intended, you will have to make use of a certificate authority. A certificate authority is a central "bookkeeper" that will provide certificates to parties needing one, and it will provide the means to verify that a certificate offered during encryption negotiation is indeed valid and belongs to the instance that presents it.

The main issue with using a certificate authority to provide signed certificates is that you will have to ensure you have a certificate authority service in your landscape capable of generating and providing new certificates on demand.

In general, looking at the common way certificates are signed and handed out, it is a tiresome process which might involve third parties and/or manual processing. Within an environment where signed certificates are needed directly and on the fly this is not a real option. This means that requesting signed certificates from the certificate authority needs to be direct and preferably based upon a REST API.

Certificate authority as a service
When designing your microservices deployment to make use of HTTPS and certificates signed by a certificate authority, you will need to have the certificate authority as a service. The certificate authority as a service should enable services to request a new certificate when they are initialized. A slight alternative is that your orchestration tooling requests the certificate on behalf of the service that needs to be provisioned, and provides the certificate during the provisioning phase.

In both cases you will need to have the option to request a new certificate, or request a certificate revocation when the service is terminated, via a REST API.

The below diagram shows on a high level the implementation of a certificate authority as a service which enables (in this example) a service instance to request a signed certificate to be used for initiating HTTPS connections with assured protocol level encryption.


To ensure decoupling between the microservices and the certificate authority we do not allow direct interaction between the microservice instances and the certificate authority. From a security point of view, as well as from a decoupling and compartmentalization point of view, this is a good practice and adds additional layers of security to the overall footprint.

When a new instance of a microservice is being initialized, whether as a Docker container in the Oracle Container Cloud Service or as a virtual machine instance in the Oracle Compute Cloud Service, the initialization will request a new signed certificate from the certificate microservice.

The certificate microservice will request a new certificate by calling the certificate authority's REST API on behalf of the initiating microservice. The answer provided by the certificate authority is passed through by the certificate microservice to the requesting party. In addition to being just a proxy, it is good practice to ensure your certificate microservice performs a number of additional verifications to see if the requesting party is authorized to request a certificate, and to ensure the right level of auditing and logging is done to provide an audit trail.

Giving the CA a REST API
When exploring certificate authority implementations and solutions it will become apparent that they have, in general, been developed without a REST API in mind. As the concept of the certificate authority was already in place long before microservice concepts came into play, you will find that the integration options are often not readily available.

An exception to this is CFSSL, CloudFlare's SSL toolkit, a project on GitHub. The CFSSL project provides a free and open-source PKI toolkit with a full set of rich REST APIs to undertake all required actions in a controlled manner.

As an example, the creation of a new certificate can be done by sending a JSON payload to the CFSSL REST API; the return message will consist of a JSON response which contains the cryptographic materials needed to ensure the requesting party can enable HTTPS. Below you see the JSON payload you can send to the REST API. This is a specific request for a certificate for the ms001253 instance located in the Oracle Compute Cloud Service.

{
 "request": {
  "CN": "ms001253.compute-acme.oraclecloud.internal",
  "hosts": ["ms001253.compute-acme.oraclecloud.internal"],
  "key": {
   "algo": "rsa",
   "size": 2048
  },
  "names": [{
   "C": "NL",
   "ST": "North-Holland",
   "L": "Amsterdam",
   "O": "ACME Inc."
  }]
 }
}
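
As a minimal sketch, assuming the payload above is saved as newcert_request.json and CFSSL is serving its API on its default port 8888 on a hypothetical internal host named ca.internal, the request could be sent like this:

# Send the certificate request payload to the CFSSL newcert endpoint.
# ca.internal and newcert_request.json are illustrative assumptions.
curl -s -X POST -H "Content-Type: application/json" \
     -d @newcert_request.json \
     http://ca.internal:8888/api/v1/cfssl/newcert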

As a result you will be given back a JSON payload containing all the required information. Due to the way CFSSL is built you will have the response almost instantly. The combination of being able to request a certificate via a call to a REST API and getting the result back directly makes it very usable for cloud implementations where you scale the number of instances (VMs, containers, ...) up or down all the time.

{
 "errors": [],
 "messages": [],
 "result": {
  "certificate": "-----BEGIN CERTIFICATE-----\nMIIDRzCCAjGgAwIBAg2 --SNIP-- 74m1d6\n-----END CERTIFICATE-----\n",
  "certificate_request": "-----BEGIN CERTIFICATE REQUEST-----\nMIj --SNIP-- BqMtkb\n-----END CERTIFICATE REQUEST-----\n",
  "private_key": "-----BEGIN EC PRIVATE KEY-----\nMHcCAQEEIJfVVIvN --SNIP-- hYYg==\n-----END EC PRIVATE KEY-----\n",
  "sums": {
   "certificate": {
    "md5": "E9308D1892F1B77E6721EA2F79C026BE",
    "sha-1": "4640E6DEC2C40B74F46C409C1D31928EE0073D25"
   },
   "certificate_request": {
    "md5": "AA924136405006E36CEE39FED9CBA5D7",
    "sha-1": "DF955A43DF669D38E07BF0479789D13881DC9024"
   }
  }
 },
 "success": true
}

The API endpoint for creating a new certificate is /api/v1/cfssl/newcert; however, CFSSL provides a lot more API calls to undertake a number of actions. One of the reasons for implementing the intermediate microservice is that it can ensure clients cannot initiate some of those API calls, without the need to change the way CFSSL is built.

The below overview shows the main API endpoints that are provided by CFSSL. A full set of documentation on the endpoints can be found in the CFSSL documentation on Github.

  • /api/v1/cfssl/authsign
  • /api/v1/cfssl/bundle
  • /api/v1/cfssl/certinfo
  • /api/v1/cfssl/crl
  • /api/v1/cfssl/info
  • /api/v1/cfssl/init_ca
  • /api/v1/cfssl/newcert
  • /api/v1/cfssl/newkey
  • /api/v1/cfssl/revoke
  • /api/v1/cfssl/scan
  • /api/v1/cfssl/scaninfo
  • /api/v1/cfssl/sign


Certificate verification
One of the main reasons we stated that one should not use self-signed certificates, and should use certificates from a certificate authority instead, is that you want to have the option of verification.

When conducting a verification of a certificate, checking that it is indeed valid and by doing so getting an additional level of trust, you will have to verify the certificate received from the other party with the certificate authority. This is done based upon OCSP, the Online Certificate Status Protocol. A simple high level example of this is shown in the below diagram:

Within the high level diagram as shown above you can see that:

  • A service will request a certificate from the certificate microservice during the initialization phase
  • The certificate microservice requests a certificate on its behalf at the certificate authority
  • The certificate authority sends the certificate back to the certificate microservice, after which it is sent to the requesting party
  • The requesting party uses the response to include the certificate in the configuration to allow HTTPS traffic


As soon as the instance is up and running it is eligible to receive requests from other services. As an example: if example service 0 would call example service 2, the first response during encryption negotiation would be that example service 2 sends back a certificate. If you have an OCSP responder in your network, example service 0 can contact the OCSP responder to check the validity of the certificate received from example service 2. If the response indicates that the certificate is valid, one can assume that a secured connection can be made and the other party can be trusted.
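
To make this concrete, below is a minimal sketch of such a check using the openssl command line tool; the file names and the OCSP responder URL are illustrative assumptions, not values from a real deployment:

# Verify the certificate received from example service 2 against the OCSP
# responder. ca.pem, service2.pem and the URL are hypothetical placeholders.
openssl ocsp -issuer ca.pem \
             -cert service2.pem \
             -url http://ocsp.internal:8888 \
             -resp_text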

Conclusion
Implementing and enforcing that only encrypted connections are used between services is a good practice and should be at the top of your list when designing your microservices based solution. One should include this in the first stage and within the core of the architecture. Trying to implement core security functionality at a later stage is commonly a cumbersome task.

Ensuring you have all the right tools and services in place to ensure you can easily scale up and down while using certificates is something that is vital to be successful.

Even though it might sound relatively easy to ensure HTTPS is used everywhere and in the right manner, it will require effort to ensure it is done in the right way so it becomes an asset and not a liability.

When done right it is an ideal addition to a set of design decisions for ensuring a higher level of security in microservice based deployments.

Wednesday, March 22, 2017

Oracle Linux - Short Tip 6 - find memory usage per process

Everyone operating an Oracle Linux machine, or any other operating system for that matter, will at a certain point have to look at memory consumption. The first question when looking at memory consumption during a memory optimization project is: which process is currently using how much memory? Linux provides a wide range of tools and options to gain insight into all facets of system resource usage.

For those who "just" need to have a quick insight in the current memory consumption per process on Oracle Linux the below command can be extremely handy:

ps -eo size,pid,user,command --sort -size | awk '{ hr=$1/1024 ; printf("%13.2f Mb ",hr) } { for ( x=4 ; x<=NF ; x++ ) { printf("%s ",$x) } print "" }'

It will provide a quick overview of the current memory consumption in MB per process.

[root@devopsdemo ~]# ps -eo size,pid,user,command --sort -size | awk '{ hr=$1/1024 ; printf("%13.2f Mb ",hr) } { for ( x=4 ; x<=NF ; x++ ) { printf("%s ",$x) } print "" }'
         0.00 Mb COMMAND
       524.63 Mb /usr/sbin/console-kit-daemon --no-daemon
       337.95 Mb automount --pid-file /var/run/autofs.pid
       216.54 Mb /sbin/rsyslogd -i /var/run/syslogd.pid -c 5
         8.81 Mb hald
         8.46 Mb dbus-daemon --system
         8.36 Mb auditd
         2.14 Mb /sbin/udevd -d
         2.14 Mb /sbin/udevd -d
         1.38 Mb crond
         1.11 Mb /sbin/udevd -d
         1.04 Mb ps -eo size,pid,user,command --sort -size
         0.83 Mb sshd: root@pts/0
         0.74 Mb cupsd -C /etc/cups/cupsd.conf
         0.73 Mb qmgr -l -t fifo -u
         0.73 Mb login -- root
         0.65 Mb /usr/sbin/abrtd

The overview is extremely useful when you need to quickly find the processes that consume the most memory, or memory consuming processes which are not expected to use (this much) memory.
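
Do note that the size field used above reports an estimate of the virtual (swappable) size of a process rather than what it currently holds in RAM. If resident memory is what you are after, a variant of the same one-liner sorting on rss can be used:

# Same overview, but based on resident set size (RSS) instead of the size field.
ps -eo rss,pid,user,command --sort -rss | awk '{ hr=$1/1024 ; printf("%13.2f Mb ",hr) } { for ( x=4 ; x<=NF ; x++ ) { printf("%s ",$x) } print "" }'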

Tuesday, March 21, 2017

Oracle Cloud - architecture blueprint - Central logging for microservices

When you engage in developing a microservices architecture based application landscape, at one point in time the question about logging will become apparent. When starting to develop with microservices you will see that there are some differences with monolithic architectures that will drive you to rethink your logging strategy. Where within a monolithic architecture you will have one central server, or a cluster of servers, where the application is running, in a microservices architecture you will have n nodes, containers, instances and services.

In a monolithic architecture most business flows run within a single server, and end-to-end logging will be relatively simple to implement and later to correlate and analyze. If we look at the below diagram you will see that a call to the API gateway can result in calls to all available services as well as to the service registry. This also means that the end-to-end flow will be distributed over all the different services, and logging will in part be done on each individual node and not in one central node (server) as is the case in a monolithic application architecture.



When deploying microservices in, for example, the Oracle Public Cloud Container Cloud Service, it is good practice to ensure that each individual Docker container, as well as the microservice itself, pushes its logging to a central API which receives the log files in a central location.

Implement central logging in the Container Cloud Service
The difference between the logging from the microservice and the Docker container deployed in the Oracle Public Cloud Container Cloud Service is that the microservice will be sending service-specific logging, developed during the development of the service, which is sent to a central logging API. This can include technical logging as well as functional business flow logging which can be used for auditing.

In some applications the technical logging is specifically separated from the business logging. This ensures that business information is not available to technical teams and can only be accessed by business users who need to undertake an audit.

Technical logging on the container level is the lower level logging which is generated by Docker and the daemon providing the services needed to run the microservice.


The above diagram shows the implementation of an additional microservice for logging. This microservice will provide a REST API capable of receiving JSON based logging, ensuring that all microservices can push their logging to this microservice.

When developing the mechanism which will push the log information, or audit information, to the logging microservice it is good to ensure that this is a forked logging implementation, as sketched below. More information on forked logging and how to implement it while preventing execution delay in high speed environments can be found in this blogpost, where we illustrate this with a bash example.
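
As a minimal sketch of the idea, assuming a hypothetical logging microservice endpoint, the fork can be as simple as running the HTTP call in a background subshell so the calling process does not block on the log call:

# Forked logging sketch; http://logging-ms.internal/api/v1/log is a
# hypothetical placeholder for your own logging microservice endpoint.
log_event () {
  ( curl -s -X POST -H "Content-Type: application/json" \
         -d "$1" http://logging-ms.internal/api/v1/log >/dev/null 2>&1 & )
}

log_event '{"service":"ms001253","level":"INFO","msg":"service initialized"}'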

Centralize logging with Oracle Management Cloud
Oracle provides, as part of the Public Cloud portfolio, the Oracle Management Cloud, which among other things provides Log Analytics. When developing a strategy for centralized logging of your microservices you can have the receiving logging microservice push all logs to a central consolidation server in the Oracle Compute Cloud. You can then have the Oracle Management Cloud Log Analytics service collect these and include them in the service provided by Oracle.

An example of this architecture is shown on a high level in the below diagram.


The benefit of the Oracle Management Cloud is that it provides an integrated solution which can be combined with other systems and services running in the Oracle Cloud, any other cloud, or your traditional datacenter.


An example of the interface which is provided by default by the Oracle Management Cloud is shown above. This framework can be used to collect logging and analyze it for your Docker containers and your microservices, as well as other services deployed as part of the overall IT footprint.

The downside for some architects and developers is that you have to comply with a number of standards and methods defined in the solution by Oracle. The upside is that a large set of analysis tooling and intelligence is pre-defined and available out of the box.

Centralize logging with the ELK stack
Another option to consolidate logging is making use of non-Oracle solutions. Splunk comes to mind, however, for this situation the ELK stack might be more appropriate. The ELK stack consists of Elasticsearch, Logstash and Kibana, complemented with Elastic Beats and the standard REST APIs.

The ELK stack provides a lot more flexibility to developers and administrators, however it requires more understanding of how to work with ELK. The below image shows a high level representation of the ELK stack in combination with Beats.


As you can see in the above image there is a reservation for a {Future}beat. This is the place where you can deploy your own developed Beat; you can also use this method to do a direct REST API call to Logstash or directly to Elasticsearch. When developing logging for microservices it might be advisable to store the log data directly into Elasticsearch from within the code of the microservice. This might result in a deployment as shown below, where the ELK stack components, including Kibana for reporting and visualization, are deployed in the Oracle Compute Cloud Service.

This will result in a solution where all log data is consolidated in Elasticsearch and you can use Kibana for analysis and visualization. You can see a screenshot from Kibana below.


The upside of using the ELK stack is that you will have full freedom and possibly more direct integration options. The downside is that you will need to do more yourself and need a deeper knowledge of your end-to-end technology (not that that is necessarily a bad thing).
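
As a hedged illustration of the direct-to-Elasticsearch option mentioned above, assuming Elasticsearch runs on a hypothetical host named elasticsearch.internal on its default port 9200 with a microservice-logs index, a single REST call from the microservice could look like this:

# Index a log document directly into Elasticsearch; host, index and type
# names are illustrative assumptions.
curl -s -X POST -H "Content-Type: application/json" \
     -d '{"service":"ms001253","level":"ERROR","msg":"connection refused","@timestamp":"2017-03-21T10:15:00Z"}' \
     http://elasticsearch.internal:9200/microservice-logs/log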

Conclusion
When you start developing an architecture for microservices you will need to take a fresh look at how you will do logging. You will have to understand the needs of both your business and your DevOps teams. Implementing logging should be done in a centralized fashion to ensure you have a good insight into the end-to-end business flow as well as all technical components.

The platform you select for this will depend on a number of factors. Both solutions outlined in the above post show some of the benefits and some of the downsides. Selecting the right solution will require some serious investigation. Taking the time to make this decision will pay back over time; it should not be taken lightly.

Friday, March 17, 2017

Oracle Linux - short tip #5 - check last logins

Need to quickly check who logged into a specific Oracle Linux machine and from where they logged into the system? You can use the last command to make that visible. In effect, last will read the file /var/log/wtmp and display it in a human readable manner. If you were to cat /var/log/wtmp you would notice that this is not the most "easy" way of getting your information.

As an example, if you execute last with only the -a flag (to show the host information in the last column) you might see something like the below:
[root@temmpnode ~]# last -a
opc      pts/3        Fri Mar 17 08:42   still logged in    61.113.181.37
opc      pts/3        Fri Mar 17 07:45 - 07:45  (00:00)     61.113.181.37
opc      pts/2        Fri Mar 17 07:14 - 09:24  (02:10)     61.113.181.37
opc      pts/1        Fri Mar 17 07:09   still logged in    61.113.181.37
opc      pts/0        Fri Mar 17 07:03   still logged in    61.113.181.37


The last command has a number of parameters that can make your life easier when trying to find out who logged into the system.

-f file
Tells last to use a specific file instead of /var/log/wtmp.

-num   
This is a count telling last how many lines to show.

-n num 
The same as -num

-t YYYYMMDDHHMMSS
Display the state of logins as of the specified time. This is useful, e.g., to determine easily who was logged in at a particular time; specify that time with -t and look for "still logged in".

-R    
Suppresses the display of the hostname field.

-a    
Display the hostname in the last column. Useful in combination with the next flag.

-d    
For non-local logins, Linux stores not only the host name of the remote host but its IP number as well. This option translates the IP number back into a hostname.

-F    
Print full login and logout times and dates.

-i    
This option is like -d in that it displays the IP number of the remote host, but it displays the IP number in numbers-and-dots notation.

-o    
Read an old-type wtmp file (written by linux-libc5 applications).

-w    
Display full user and domain names in the output.

-x    
Display the system shutdown entries and run level changes.
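
As a simple illustration of combining a couple of these flags, the below shows the five most recent logins with full timestamps and untranslated IP addresses:

# Show the 5 most recent logins with full dates (-F), IP addresses in
# numbers-and-dots notation (-i) and the host information in the last column (-a).
last -a -i -F -n 5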

Thursday, March 09, 2017

Oracle Cloud - Backup Jenkins to the Oracle Cloud

If you are using Jenkins as the automation server in your build and DevOps processes it most likely is becoming an extremely valuable asset. It is very likely that you have a large number of processes automated and people have been spending a large amount of time to develop scripting, plugins and automation to ensure that your entire end-2-end process works in the most optimal manner.

In case Jenkins forms a critical role in your IT footprint you will most likely have a number of Jenkins servers working together to execute all the jobs you require. This means that if one node fails you will not have an issue. However, if you were to lose a site or a storage appliance you do want to have a backup.

Making a backup of Jenkins is relatively easy. In effect, all artifacts needed to rebuild a Jenkins server to a running solution are stored in the Jenkins home. This makes it extremely easy from a backup point of view. However, keeping backups in the same datacenter is never a good idea; for this reason you would like to backup Jenkins to another location.

Making the assumption you run Jenkins within your own datacenter, a backup target can be the Oracle Cloud. If you run your Jenkins server already in the Oracle Cloud, you can backup Jenkins to another cloud datacenter.

Backup Jenkins to the Oracle Storage cloud Service
As stated, the Jenkins objects are stored as files, which makes it very simple to create a backup. If you want to backup to the Oracle Storage Cloud this takes in effect two steps, which can both be scripted and periodically executed.

Ensure you package all the content of your Jenkins directory. We assume you have all your information stored in the default location when installing Jenkins on Oracle Linux, which is /var/lib/jenkins. This means that we should package the content of this location and after that transport it to the Oracle Storage Cloud Service.

The backup can be done by using the below example command, which will create a .tar.gz file in the /tmp directory. The file name will contain the epoch time stamp to ensure it is really unique.

tar -zcvf /tmp/jenkins_backup_$(date +%s)_timestamp.tar.gz /var/lib/jenkins

After we have created the .tar.gz file we will have to move it to the Oracle Storage Cloud. To interact with the Oracle Storage Cloud and push a file to it you can use the Oracle Storage Cloud File Transfer Manager command-line interface (FTM CLI). For more background information and more advanced features (like, for example, retention) you can refer to the FTM CLI documentation.

As a simple example we will upload the file we just created to a container in the Oracle Storage Cloud named JenkinsBackup.

java -jar ftmcli.jar upload -N jenkins_backup_1489089140_timestamp.tar.gz JenkinsBackup /tmp/jenkins_backup_1489089140_timestamp.tar.gz

Now we should have the file securely stored in the Oracle Storage Cloud and ready to be retrieved when needed. As you can see, the above command will need a number of additional actions when you want to create a fully scripted version of this, as sketched below. You will also have to make sure that you have the right configuration for the FTM CLI stored in an ftmcli.properties file and that you define whether you want to make use of the backup option and retention times in the backup cloud.
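
As a minimal sketch of such a scripted version, under the assumption that ftmcli.jar and its ftmcli.properties live in /opt/ftmcli and that the script itself is saved as /opt/scripts/jenkins_backup.sh, the whole backup could look like this and be scheduled with cron:

#!/bin/bash
# Hypothetical backup script: package the Jenkins home and upload the
# archive to the JenkinsBackup container in the Oracle Storage Cloud.
BACKUP_FILE="/tmp/jenkins_backup_$(date +%s)_timestamp.tar.gz"
tar -zcf "${BACKUP_FILE}" /var/lib/jenkins
cd /opt/ftmcli
java -jar ftmcli.jar upload -N "$(basename ${BACKUP_FILE})" JenkinsBackup "${BACKUP_FILE}"
rm -f "${BACKUP_FILE}"

# Example crontab entry to run the backup every night at 02:00:
# 0 2 * * * /opt/scripts/jenkins_backup.sh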

However, when done, you have the assurance that your backups are written to the Oracle Cloud and will be available in case of a disaster.

Backup Jenkins to the Oracle Developer Cloud Service.
As we know... Jenkins and Git are friends... so without a doubt it will not come as a surprise that you can also backup Jenkins to a Git repository. The beauty of this is that Oracle will provide you a Git repository as part of the Oracle Developer Cloud Service.

This means that you can backup Jenkins directly into the Oracle Developer Cloud Service if you want. Even though the solution is elegant, I do have a personal preference for the backup in the file based manner.

However, for those wanting to explore the options to backup to a GIT repository in the Oracle Developer Cloud Service, a plugin is available which can be used to undertake this task. You can find the plugin on this page on the Jenkins website.

Oracle Linux – Install Gitlab on Oracle Linux

Even though Oracle provides the option to use Git from within the Oracle Developer Cloud Service, there are situations where you do want to use your own Git installation. For example, situations where you need a local on premise installation for storing information in Git because you are not allowed to store the information outside of the organization's datacenter. Or situations where you need an additional level of freedom to undertake specific actions which are not always allowed by the Oracle Developer Cloud Service.

In effect, plain Git will be just Git, without a graphical user interface and the additional functionality that makes life much easier for developers and administrators. One of the solutions fitting for deploying your own Git repository on Oracle Linux, with a full and rich set of options and a graphical user interface in the form of a web interface, is GitLab.

GitLab functionality
When adopting GitLab you will get a lot more functionality compared to "just" running git on your server. To name a couple of the features that are introduced by GitLab, see the below examples:

  • Organize your repositories into private, internal or public projects
  • Manage access and permissions with different user roles and settings for internal and external users
  • Create Websites for your GitLab projects, groups and users
  • Unlimited public and private repos, create a new repo for even the smallest projects
  • Import existing projects from GitHub, BitBucket, Google Code, Fogbugz, or any git repo with a URL.
  • Protected branches, control read/write permissions to specific branches.
  • Keep your documentation within the project using GitLab’s built-in wiki system.
  • Collect and share reusable code with code Snippets
  • Control GitLab with a set of powerful APIs.


As you can see from the image above, GitLab will provide you a full web GUI to use by administrators as well as end-users in your organization.

Install GitLab on Oracle Linux
Installation of GitLab on Oracle Linux is relatively easy. Assuming you have a standard Oracle Linux 6 installation available for deploying GitLab, the below steps should be undertaken to ensure you have a fully working GitLab environment.

Make sure you have the dependencies installed on your system. This can be done with the below commands:

sudo yum install curl openssh-server openssh-clients postfix cronie
sudo service postfix start
sudo chkconfig postfix on
sudo lokkit -s http -s ssh

Ensure that you have the GitLab YUM repository available so we can install GitLab with YUM.

curl -sS https://packages.gitlab.com/install/repositories/gitlab/gitlab-ce/script.rpm.sh | sudo bash
sudo yum install gitlab-ce

Now we can issue the reconfigure command to ensure that GitLab is fully configured for your specific host.

sudo gitlab-ctl reconfigure

If all the steps completed without any issue you will be able to navigate with a browser to your machine and access GitLab on the default port, which is port 80.
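
To verify the installation you can, for example, check that all GitLab components report as running and that the web interface answers on port 80; the below is a quick sketch of such a check:

# Check the status of the GitLab services and probe the web interface.
sudo gitlab-ctl status
curl -I http://localhost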

Wednesday, March 08, 2017

Oracle Cloud – moving to a software defined cloud

When companies move from a traditional on premise IT footprint to a cloud based footprint this introduces a major change for the IT department. Where traditional IT departments are used to owning all assets and hosting them in the company's datacenter, the physical assets are now owned by the cloud provider and the physical datacenter is largely off limits for customers. This means that all assets should be seen as virtual assets.

Traditional view 
Where processes and procedures in a traditional on premise IT footprint are still largely based upon physical principles rather than virtual principles, you see that a large part of those processes and procedures include manual work. This includes manually changing firewalls, manually plugging network cables and, for parts, manually installing operating systems and applications.

Even though increased adoption of solutions like Puppet and Chef has been seen in traditional IT footprints over the years, a large part of the IT footprint is not based upon the principle of software defined infrastructure, also referred to as infrastructure as code.

Over the years a large number of companies have moved from bare-metal systems to a more virtualized environment; VMware, Oracle VM and other virtualization platforms have been introduced. By being adopted into the footprint they have brought a level of software defined networking and software defined storage with them.

While visiting a large number of customers and supporting them with their IT footprints from both an infrastructure point of view as well as an application point of view, I have seen that a large number of companies adopt those solutions as silo solutions. Solutions like Oracle Enterprise Manager, Oracle VM Manager and vCenter from VMware are used, and in some cases customers have included Puppet and/or Chef. However, only a fraction of the companies make use of the real advantages that are available and couple all the silo based solutions into an end-2-end chain.

The end-2-end chain
The end-2-end chain in a software defined IT footprint is the principle where you couple all the silo based solutions, management tooling, assets, applications and configuration into one automated solution. This means that everything you do, everything you build, deploy or configure, is described in machine readable formats and used to automatically deploy the changes or new builds.

This also holds that everything is under version control, from your firewall settings to the virtual machines you deploy and applications and application configuration. Everything is stored under version control and is repeatable.

This also means that in effect your IT staff has no direct need to be in the datacenter or execute changes manually. They change configuration and push it into the full end-2-end automation stack, which will take the needed actions based upon the infrastructure as code principle.

The difficulty with on premise infrastructure as code
One of the main challenges while implementing infrastructure as code in an existing on premise IT footprint is that the landscape has grown organically over the years. Due to the model in which IT footprints organically grow in the majority of companies, you will see that a large number of solutions have been implemented over time, all doing their part in the total picture and deployed the moment they were needed.

The issue this causes is that in most cases the components are selected only based upon the functionality they provide, while not taking into account how they can be integrated into an end-2-end chain.

This means that, in comparison to a deployment in a cloud, the implementation of a full end-2-end software defined model can become relatively hard and will require an increasing number of custom written scripts and integration models, which do not always provide the most optimal result one would like to achieve.

Building the software defined cloud model
When moving to a cloud based solution such as the Oracle Public Cloud a couple of advantages are directly present.

  • Companies are forced to rethink their strategies
  • Cloud will in most cases be a green field in comparison to the brown field of existing on premise IT footprints
  • The cloud, in this case the Oracle Public Cloud, provides as standard all the components and interfaces required to adopt a full software defined model. 

In cases where a company starts to adopt the Oracle Public Cloud as the new default location to position new systems and solutions this means that the adoption of a software defined model becomes much easier.

All components that are used as the building blocks for the cloud are by default accessible by making use of APIs. Everything is developed and driven in a way that it can hook into automation tooling, providing the options to do full end-2-end software defined orchestration, deployment and maintenance of all assets.

While adopting a software defined model and while adopting automation and orchestration to a new level the same ground rule applies as for DevOps. For both software defined cloud automation and orchestration, just as for DevOps, there is no single recipe. Selecting the right tools for the job will be depending on what a company intends to achieve, what integrates the best with specific other tooling that is needed in the overall IT landscape.

Having stated that, for everyone who starts looking into adopting a full software defined cloud model and adopting automation and orchestration in an end-2-end fashion, the following toolsets are very much of interest and should be evaluated and selected based upon their use and level of integration:

  • Terraform & Oracle Terraform provider
    • Terraform enables you to safely and predictably create, change, and improve production infrastructure. It is an open source tool that codifies APIs into declarative configuration files that can be shared amongst team members, treated as code, edited, reviewed, and versioned. The Oracle Terraform provider provides the connection between Terraform and the Oracle Public Cloud APIs.
  • Jenkins
    • Jenkins is an open source automation server written in Java. Originally developed as a build server it currently is one of the main building blocks for companies who intend to build automation pipelines (chains). Providing a large set of plugins and the option to develop your own plugins and custom scripting it is currently becoming a tool of choice for a lot of companies.
  • Ansible / Puppet/ Chef
    • Ansible: Ansible is an open-source automation engine that automates cloud provisioning, configuration management, and application deployment.
    • Puppet: Puppet is, among other things, an open-source software configuration management tool for central configuration management of large IT deployments.
    • Chef: Chef is, among other things, a configuration management tool written in Ruby and Erlang for central configuration management of large IT deployments.
    • Without doing honor to the individual solutions we name them as one item in this blogpost. Each solution has specific additional use cases and benefits; however, in general the main use for all these solutions is to support the automatic deployment (installation) of operating systems and applications, as well as to manage configuration over large numbers of systems in a centralized manner.  
  • Oracle PaaS Service Manager Command Line Interface
    • The full CLI to the Oracle Cloud PaaS offerings, which provides the option to fully automate the Oracle PaaS services.
  • Bash / Python
    • Even with all the products and plugins, in many cases a number of things desired in an end-2-end automation are so specific that they need to be scripted. For this a wide range of programming languages is available, among which Python and the Linux scripting language Bash have a strong foothold compared to a lot of other popular languages. 

Defining your goal, selecting the tools and ensuring that you are able to make the best possible use of the cloud by adopting a full end-2-end software defined cloud will ensure you can benefit optimally from the options current technology provides you.

Sunday, March 05, 2017

Oracle Linux - perf - error while loading shared libraries: libdw.so.1

When using a standard Oracle Linux template based installation on the Oracle Public Cloud and you try to start the perf command, you will be hit by an error. The reason for this is that the perf command is part of the deployment, however in a broken form. The libdw.so.1 library, which is needed to start perf, is missing. For this reason we have to ensure that libdw.so.1 is available on the system.

libdw.so.1 is part of the elfutils lib, meaning you will have to install elfutils with yum. elfutils is a collection of utilities and libraries to read, create and modify ELF binary files, find and handle DWARF debug data, symbols, thread state and stacktraces for processes and core files on GNU/Linux.

Executable and Linkable Format (ELF, formerly called Extensible Linking Format) is a common standard file format for executables, object code, shared libraries, and core dumps. First published in the System V Release 4 (SVR4) Application Binary Interface (ABI) specification, and later in the Tool Interface Standard, it was quickly accepted among different vendors of Unix systems. In 1999 it was chosen as the standard binary file format for Unix and Unix-like systems on x86 by the 86open project.

In effect, prior to fixing it, the issue you will see is the following:

[opc@jenkins-dev 1]$ perf
/usr/libexec/perf.3.8.13-118.14.2.el6uek.x86_64: error while loading shared libraries: libdw.so.1: cannot open shared object file: No such file or directory
[opc@jenkins-dev 1]$

To install the needed package you can make use of the standard Oracle Linux YUM repository and execute the below command:

yum -y install elfutils

Now you can check that the needed file is present on the system as shown below:

[root@jenkins-dev ~]# ls -la /usr/lib64/libdw.so.1
lrwxrwxrwx 1 root root 14 Mar  5 11:04 /usr/lib64/libdw.so.1 -> libdw-0.164.so
[root@jenkins-dev ~]#

This also means that if you start perf you will no longer face an issue and you will have the full capability of perf when needed:

[root@jenkins-dev ~]# perf

 usage: perf [--version] [--help] COMMAND [ARGS]

 The most commonly used perf commands are:
   annotate        Read perf.data (created by perf record) and display annotated code
   archive         Create archive with object files with build-ids found in perf.data file
   bench           General framework for benchmark suites
   buildid-cache   Manage build-id cache.
   buildid-list    List the buildids in a perf.data file
   diff            Read two perf.data files and display the differential profile
   evlist          List the event names in a perf.data file
   inject          Filter to augment the events stream with additional information
   kmem            Tool to trace/measure kernel memory(slab) properties
   kvm             Tool to trace/measure kvm guest os
   list            List all symbolic event types
   lock            Analyze lock events
   record          Run a command and record its profile into perf.data
   report          Read perf.data (created by perf record) and display the profile
   sched           Tool to trace/measure scheduler properties (latencies)
   script          Read perf.data (created by perf record) and display trace output
   stat            Run a command and gather performance counter statistics
   test            Runs sanity tests.
   timechart       Tool to visualize total system behavior during a workload
   top             System profiling tool.
   trace           strace inspired tool
   probe           Define new dynamic tracepoints

 See 'perf help COMMAND' for more information on a specific command.

[root@jenkins-dev ~]#

Oracle Linux - prevent sed errors when replacing URL strings

When scripting in bash under Oracle Linux and in need of searching and replacing strings in a text file, the sed command is where most people turn. The reason for this is that sed is a stream editor for filtering and transforming text, which makes it ideal for this purpose.

I recently started developing a full end-to-end automation and integration for supporting projects within our company to work with the Oracle Public Cloud. One of the options to do automation with the Oracle Cloud is using Terraform. Terraform enables you to safely and predictably create, change, and improve production infrastructure. It is an open source tool that codifies APIs into declarative configuration files that can be shared amongst team members, treated as code, edited, reviewed, and versioned. This means Terraform helps you to make full use of infrastructure as code when working with the Oracle Cloud.

One of the simple bugs in my code I encountered was a "strange" error somewhere in the functions developed to create a Terraform plan. In effect, the Terraform base plan we developed was a plan without any specifics. One of the specifics needed was the API endpoint of the Oracle Public Cloud, which needed to be changed from a placeholder into the real value provided by a Jenkins build job.

The initial unit testing was without any issue while using random values; however, every time a valid URL format was used the code would break and the Jenkins build responsible for building the Terraform plan for the Oracle Cloud would end up as a broken build.

The error message received was the following:

sed: -e expression #1, char 26: unknown option to `s'

The reason for this was the original construction of the sed command used in the code. Originally we used the below sed command to replace the ##OPC_ENDPOINT## placeholder with the actual API endpoint for the Oracle Public Cloud.

sed -i -e "s/##OPC_ENDPOINT##/$endpoint/g" terraformplan.tf

Due to the use of / as the delimiter in this command, we have an issue if we populate $endpoint with a URL which also contains a / character. The fix is rather simple, when you know it: if you use sed to work with URLs you should use a , and not /. Meaning, your code should look like the one below to do a valid replace with sed.

sed -i -e "s,##OPC_ENDPOINT##,$endpoint,g" terraformplan.tf
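
A small self-contained demonstration of the difference, using a hypothetical endpoint value, is shown below; the slash-delimited expression fails on the URL while the comma-delimited one replaces the placeholder as intended:

# Hypothetical endpoint value used only for this demonstration.
endpoint="https://api.compute.example.oraclecloud.com/"
echo "endpoint = ##OPC_ENDPOINT##" > demo.txt

# This fails with an "unknown option to `s'" error:
# sed -i -e "s/##OPC_ENDPOINT##/$endpoint/g" demo.txt

# This works because , is used as the delimiter instead of /:
sed -i -e "s,##OPC_ENDPOINT##,$endpoint,g" demo.txt
cat demo.txt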

Wednesday, March 01, 2017

Oracle Linux - Install Google golang

Go (often referred to as golang) is a free and open source programming language created at Google. Even though Go is not the most popular programming language around at this moment (sorry to all the golang people), there are still a lot of open-source projects that depend on Go. The installation of Go is relatively simple, however different from what the average Oracle Linux user, used to doing everything with the yum command, might expect.

If you want to install golang you will have to download the .tar.gz file and "install" it manually. The following steps are needed to get golang on your Oracle Linux machine:

Step 1
Download the file from the golang website

[root@jenkins-dev tmp]# curl -O https://storage.googleapis.com/golang/go1.8.linux-amd64.tar.gz
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 85.6M  100 85.6M    0     0  7974k      0  0:00:10  0:00:10 --:--:-- 10.1M
[root@jenkins-dev tmp]#

Step 2
Execute a checksum and verify the result with what is mentioned on the golang download site.

[root@jenkins-dev tmp]# sha256sum go1.8.linux-amd64.tar.gz
53ab94104ee3923e228a2cb2116e5e462ad3ebaeea06ff04463479d7f12d27ca  go1.8.linux-amd64.tar.gz
[root@jenkins-dev tmp]#

Step 3
Unpack the file into /usr/local

[root@jenkins-dev tmp]# tar -C /usr/local/ -xzf go1.8.linux-amd64.tar.gz

Step 4
Verify that go is in the right location

[root@jenkins-dev tmp]# ls -la /usr/local/go
total 168
drwxr-xr-x  11 root root  4096 Feb 16 14:29 .
drwxr-xr-x. 13 root root  4096 Mar  1 14:47 ..
drwxr-xr-x   2 root root  4096 Feb 16 14:27 api
-rw-r--r--   1 root root 33243 Feb 16 14:27 AUTHORS
drwxr-xr-x   2 root root  4096 Feb 16 14:29 bin
drwxr-xr-x   4 root root  4096 Feb 16 14:29 blog
-rw-r--r--   1 root root  1366 Feb 16 14:27 CONTRIBUTING.md
-rw-r--r--   1 root root 45710 Feb 16 14:27 CONTRIBUTORS
drwxr-xr-x   8 root root  4096 Feb 16 14:27 doc
-rw-r--r--   1 root root  5686 Feb 16 14:27 favicon.ico
drwxr-xr-x   3 root root  4096 Feb 16 14:27 lib
-rw-r--r--   1 root root  1479 Feb 16 14:27 LICENSE
drwxr-xr-x  14 root root  4096 Feb 16 14:29 misc
-rw-r--r--   1 root root  1303 Feb 16 14:27 PATENTS
drwxr-xr-x   7 root root  4096 Feb 16 14:29 pkg
-rw-r--r--   1 root root  1399 Feb 16 14:27 README.md
-rw-r--r--   1 root root    26 Feb 16 14:27 robots.txt
drwxr-xr-x  46 root root  4096 Feb 16 14:27 src
drwxr-xr-x  17 root root 12288 Feb 16 14:27 test
-rw-r--r--   1 root root     5 Feb 16 14:27 VERSION
[root@jenkins-dev tmp]#

Step 5
Add golang to your $PATH variable to make it available system wide and check if you can use go

[root@jenkins-dev tmp]#
[root@jenkins-dev tmp]# go --version
-bash: go: command not found
[root@jenkins-dev tmp]#
[root@jenkins-dev tmp]# PATH=$PATH:/usr/local/go/bin
[root@jenkins-dev tmp]#
[root@jenkins-dev tmp]# go version
go version go1.8 linux/amd64
[root@jenkins-dev tmp]#
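
Do note that setting PATH this way only lasts for the current shell session. To make Go available system wide after a reboot you can, as a common convention (shown here as a sketch), place the export in a profile script:

# Make the go binary available to all users in future sessions.
echo 'export PATH=$PATH:/usr/local/go/bin' > /etc/profile.d/golang.sh
chmod 644 /etc/profile.d/golang.sh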

This in effect ensures that you now have the option to use Go on your Oracle Linux system.

Thursday, February 23, 2017

Oracle Cloud - create storage volumes with JSON orchestration

Creating storage volumes in the Oracle Compute Cloud can be done in multiple ways. The simplest way is using the web console and following the guided way of creating a new storage volume. However, when you intend to automate this and integrate it into a continuous delivery model, the manual way of doing things does not really work. In that case you will have to look into how you can create storage volumes based upon orchestrations. Orchestrations are JSON based building instructions to create objects in the Oracle Compute Cloud.

You can manually upload and start orchestrations, or you can use the REST API to create an orchestration and start it. In both cases you will need to understand how to craft a correct JSON file that will create your storage volume for you.

Storage volume JSON
The below JSON message shows the entire orchestration file used to create a storage volume named oplantest1boot.

{
 "name": "/Compute-demoname/demouser@demoname.com/oplan_test1",
 "description": "oplan test 1",
 "relationships": [],
 "oplans": [
            {
             "label": "My storage volumes",
             "obj_type": "storage/volume",
             "objects": [{
                          "name": "/Compute-demoname/demouser@demoname.com/oplantest1boot",
                          "bootable": true,
                          "imagelist": "/oracle/public/OL_6.4_UEKR3_x86_64",
                          "properties": ["/oracle/public/storage/default"],
                          "size": "12884901888",
                          "description": "boot device for oplan test 1"
                        }]
            }
           ]
 }

The JSON file shown above can be broken down in two parts. First there are the top-level attributes; these contain the name and description of an orchestration, along with other information such as the relationships between objects defined in the orchestration, start and stop times for the orchestration, and the list of objects in the orchestration.

The top-level attributes construction will envelop one or more oplans (object plans). The oplan(s) are the description of the actual object or objects that will be created when the orchestration is started.

Orchestration top level attributes
The top-level attributes part of the above example orchestration is shown below. As you can see we have removed the oplan for the storage creation to make it more readable.

{
 "name": "/Compute-demoname/demouser@demoname.com/oplan_test1",
 "description": "oplan test 1",
 "relationships": [],
 "oplans": [
            {
 
             ...OPLAN(S)...

            }
           ]
}

Orchestration attributes for storage volumes
The below shows the oplan which will create the actual storage volume. For readability we have shown this as a separate part outside of the context of the top level attribute.

            {
             "label": "My storage volumes",
             "obj_type": "storage/volume",
             "objects": [{
                          "name": "/Compute-demoname/demouser@demoname.com/oplantest1boot",
                          "bootable": true,
                          "imagelist": "/oracle/public/OL_6.4_UEKR3_x86_64",
                          "properties": ["/oracle/public/storage/default"],
                          "size": "12884901888",
                          "description": "boot device for oplan test 1"
                        }]
            }

As you can see we have a number of attributes that are specified. The main attributes you can specify for every oplan (not only for storage) are:

  • label
    • A text string describing your object plan. This can be everything as long as it is not exceeding 256 characters. 
  • obj_type
    • The obj_type attribute lets you define what type of objects will be created as part of this specific oplan. In our case we will create a storage volume, which means we will have to use the "storage/volume" object type. For other object types you can refer to the Oracle documentation on this subject.
  • objects
    • Objects is the placeholder for an array of objects of the object type specified in the obj_type attribute. This means, if you need to create multiple storage objects you can all defined them within the object placeholder.
  • ha_policy 
    • The ha_policy attribute is optional and not shown in the example above. You can state monitor as a value for this or leave it out. When the HA policy for an object is set to monitor, if the object goes to an error state or stops unexpectedly, the orchestration changes to the Error state. However, the object isn’t re-created automatically.

As you can see in the above example, the actual object in this specific object plan is of the object type storage/volume. The descriptive information about the specific volume is in the first object instance in the objects array. Here we describe the actual object. For readability we have shown this separately below:

{
 "name": "/Compute-demoname/demouser@demoname.com/oplantest1boot",
 "bootable": true,
 "imagelist": "/oracle/public/OL_6.4_UEKR3_x86_64",
 "properties": ["/oracle/public/storage/default"],
 "size": "12884901888",
 "description": "boot device for oplan test 1"
}

As you can see we have a number of attributes that are specified in the above section of the JSON message. These are the primary required attributes when creating a storage volume.

  • name
    • name is used to state the name of your storage volume. It needs to be constructed in the following manner /compute-identity_domain/user/name to ensure it is fully compatible and will be placed in the right location. 
  • size
    • Size can be given in bytes, kilobytes, megabytes, gigabytes or terabytes. The default is bytes, however every unit of measure can be used by adding an (uppercase or lowercase) identifier like B, K, M, G or T. For example, to create a volume of size 10 gigabytes, you can specify 10G, or 10240M, or 10485760K, and so on. The size needs to be in the allowed range from 1 GB to 2 TB, in increments of 1 GB.
  • properties
    • The properties section lets you select the type of storage that you require. Currently the options available are standard and low latency storage. In case you are able to work with standard storage you can use /oracle/public/storage/default as a string value. In case you need low latency and high IOPS you can use /oracle/public/storage/latency as a string value.
  • description
    • A descriptive text string describing your storage volume. 
  • bootable
    • bootable is optional and will indicate if this storage volume should be considered the boot volume of a machine. The default is false, which will be used if the attribute is not specified. If bootable is set to true you have to provide the attributes imagelist and imagelist_entry. 
  • imagelist
    • Required when bootable is set to True. Name of machine image to extract onto this volume when created. In our example case this is a publicly available image from the images created by Oracle /oracle/public/OL_6.4_UEKR3_x86_64
  • imagelist_entry
    • The imagelist_entry attribute is used to specify the version of the image from the imagelist you want to use. The default value when not provided is 1. Do note, some Oracle documentation states imagelistentry without the underscore; this is the wrong notation and you should use imagelist_entry (with the underscore). 
  • tags
    • tags is used to provide tags to the storage volume which can be used for administrative purposes.

Additionally you have the option to use a snapshot to create a volume. This can be used in a process where you want to clone machines using a storage snapshot. In this case you will have to use an existing snapshot and provide the following attributes to ensure the snapshot is restored in the new storage volume. 
  • snapshot
    • Multipart name of the storage snapshot from which you want to restore or clone the storage volume.
  • snapshot_id
    • ID of the parent snapshot from which you want to restore a storage volume.
  • snapshot_account
    • Account of the parent snapshot from which you want to restore a storage volume.
Using an orchestration
The above examples show you how you can create an orchestration for creating a storage volume. In most real-world cases you will mix the creation of a storage volume with the creation of other objects, for example an instance to which you will attach the storage. 

However, as soon as you have a valid JSON payload you can upload it to the Oracle Public Cloud via the web interface or using an API. Orchestrations that have been uploaded to the cloud can be started, and when completed they will result in the objects (a storage volume in this case) being created. 

Having the option to quickly create a JSON file as payload and send this to the Oracle Public Cloud highly supports the integration with existing automation tooling and helps in building automatic deployment and scaling solutions. 
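
As a hedged sketch of the API route, assuming the orchestration above is saved as oplan_test1.json and using a placeholder REST endpoint (take the endpoint of your own identity domain instead), the upload could look like this; the authentication call returns a cookie that must be passed to the orchestration call:

# Authenticate and upload the orchestration. The endpoint URL, user and
# password are illustrative placeholders for your own identity domain.
API="https://api-z999.compute.us0.oraclecloud.com"
curl -s -i -X POST -H "Content-Type: application/oracle-compute-v3+json" \
     -d '{"user":"/Compute-demoname/demouser@demoname.com","password":"secret"}' \
     ${API}/authenticate/
# Copy the nimbula cookie from the response above into the next call.
curl -s -X POST -H "Content-Type: application/oracle-compute-v3+json" \
     -H "Cookie: nimbula=..." \
     -d @oplan_test1.json \
     ${API}/orchestration/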

Tuesday, February 21, 2017

Oracle Linux - Integrate Oracle Compute Cloud and Slack

In a recent post on this blog we already outlined how you can integrate Oracle Developer Cloud with Slack and receive messages on events that happen in the Oracle Developer Cloud in your channel. Having this integration will ensure that your DevOps teams are always aware of what is going on and have the ability to receive mobile updates on events and directly discuss them with other team members. Even though the integration with Slack is great it is only a part of the full DevOps chain you might deploy in the Oracle Public Cloud.

One of the other places you might want to have integration with Slack is the Oracle Compute Cloud Service. If we look at the below high level representation of a continuous delivery flow in the Oracle Cloud we also see a "deployment automation" step. In this step new instances are created and a build can be deployed for unit testing, integration testing or production purposes.


In case you want to ensure your DevOps team is always aware of what is happening, and you would like to use Slack as one of the tools that enables your team to keep a tab on things, you should ensure integration with Slack in every step of the flow. This means that the creation of a new compute instance in the Oracle Compute Cloud should also report back to the Slack channel that the instance has been created.

Creating a slack webHook
One of the things you have to ensure, if you want to have integration with Slack, is that you have a Slack webHook. A webHook is essentially an API endpoint to which you can send your messages; Slack will ensure that the message is promoted to the Slack channel your DevOps team is using.

In the post where we described how to create a Slack webHook we already outlined the steps needed. We will be using the same webHook in this example.


What is especially important when creating the integration between the Oracle Compute Cloud and Slack is the part which is obfuscated in the above screenshot. This is the part of the webHook URL that is specific to your webHook and should look something like xxxxx/xxxxxx/xxxxxx. We refer to this as the slack_code in the scripting and the JSON payload when we start building our integration.
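
Before building the full integration it can be useful to verify the webHook by hand from any machine that has curl available. A minimal test could look like the example below, where the xxxxx/xxxxxx/xxxxxx part needs to be replaced with your own slack_code:

# send a simple test message to the Slack incoming webHook
curl -X POST \
     -H 'Content-type: application/json' \
     --data '{"text": "webHook test from the command line"}' \
     https://hooks.slack.com/services/xxxxx/xxxxxx/xxxxxx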

High level integration
The main intent of this action is that we want to receive a message on Slack informing us when a new Oracle Linux compute instance has come online on the Oracle Compute Cloud Service. For this we will use a custom bash script. In this example the script is hosted in my personal github repository; in a real-life situation you will most likely want to place it in a private location which you control and where you are not depending on the github repository of someone else.

What in effect will happen is that we provide the creation process of a new instance with a set of user attributes in the JSON payload used by the orchestration process. This part of the payload will be interpreted by the opc-init package, which is shipped with all the standard Oracle images that are part of the Oracle Compute Cloud Service.

We will use some custom attributes to provide the script with the slack_code and the slack_channel. We will also use the pre-bootstrap attributes to state the location of the script that will communicate with Slack as soon as the new instance is online.

Create an integrated instance
When creating a new instance on the Oracle Compute Cloud you can use the GUI or you can use the REST API to do so. In this example we will use the GUI, however, the same can be achieved by using the REST API.

When you create a new instance on the Oracle Compute Cloud you have the option to provide custom attributes to the creation process. The information provided needs to be in JSON format and will be included in the overall orchestration JSON files. This is what we will use to ensure the integration between the creation process of the new instance and Slack.


What we will provide to the "custom attributes" field is the following JSON payload:

{
  "slack_code": "XXXXXXXXX/XXXXXXXXX/XXXXXXXXXXXXXXXXXXXX",
  "slack_channel": "general",
  "pre-bootstrap": {
    "scriptURL": "https://github.com/louwersj/Oracle-Linux-Scripting/raw/master/oracle_cloud/compute_cloud/postToSlack/slackReportInstanceUp.sh",
    "failonerror": true
  }
}

As you can see we have a slack_code and a slack_channel. The slack_code holds the code we received when we created the webHook on Slack and the slack_channel represents the channel in which we want to post the message.

The pre-bootstrap part holds the scriptURL, which tells opc-init on Oracle Linux where to download the script that will be executed. failonerror is currently set to true; however, in most cases you will want to have this on false.

If we start the instance creation with this additional JSON payload, the script will be executed as soon as opc-init downloads it during the boot procedure of the new Oracle Linux instance on the Oracle Compute Cloud. The script will take the input provided in the custom attributes by doing a call to the internal Oracle Cloud REST API. Next to this, some additional information about the instance is collected by calling the meta-data REST API in the Oracle Cloud.

This in effect will make sure that a message is posted to the Slack channel you defined in the custom attributes. If we review the message we receive on Slack we should see something like the example message below:

The bash scripting part
As already stated, the central part of this integration is a bash script currently hosted on github. In a real-world situation you would want to place this on a private server; however, it can very well be used for testing the solution. The bash script is available and released as open-source.

It will be downloaded and started on your Oracle Linux instance by opc-init based upon the information provided by the pre-bootstrap part in the custom attributes of your JSON payload and it will use some of the information provided in the same JSON payload.

Additionally, it will retrieve meta-data about the instance from the internal REST API for meta-data in the Oracle Compute Cloud. The combined information will be used to craft the message and send it to your Slack channel. The code below is a version of the script which can be found at this location on github. Do note, the below version is not maintained and the latest version is only available on github.

#!/bin/bash
# NAME:
#  slackReportInstanceUp.sh 
#
# DESC:
#  To be used in combination with opc-init. The script will report
#  when a newly created instance is up on the Oracle Compute Cloud
#  into a slack channel. The information to be able to connect to
#  the right slack channel needs to be included in the userdata
#  part of the orchestration JSON file when creating a new instance
#
#  This script is tested for Oracle Linux in combination with the 
#  Oracle public cloud / compute cloud.
#
# LOG:
# VERSION---DATE--------NAME-------------COMMENT
# 0.1       20FEB17     Johan Louwers    Initial creation
#
# LICENSE:
# Copyright (C) 2017  Johan Louwers
#
# This code is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This code is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this code; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
# *
# */
# Retrieve meta-data from the internal cloud API used to populate the slack message.
# includes the instance type, local IP and the local FQDN as registered in the OPC.
 metaDataInstanceType="$(curl -m 5 --fail -s http://192.0.0.192/1.0/meta-data/instance-type)"
 metaDataLocalIp="$(curl -m 5 --fail -s http://192.0.0.192/1.0/meta-data/local-ipv4)"
 metaDataLocalHost="$(curl -m 5 --fail -s http://192.0.0.192/1.0/meta-data/local-hostname)"

# Retrieve the information needed to connect to slack. This includes the name of your
# channel on slack as well as the code required to access the incoming webHook at the
# slack website.
 channelName="$(curl -m 5 --fail -s http://192.0.0.192/1.0/user-data/slack_channel)"
 slackCode="$(curl -m 5 --fail -s http://192.0.0.192/1.0/user-data/slack_code)"

# set the slack message title
 msgTitle="Compute Cloud Service"

# Generate the slack message body, this is partially based upon the information which
# is retrieved from the meta-data api of the Oracle Public Cloud.
 msgBody="$(uname -s) instance $metaDataLocalHost is online with kernel $(uname -r). Sizing is : $metaDataInstanceType. Instance local cloud IP is $metaDataLocalIp"

# set the slack webhook url based upon a pre-defined first part and the slack code
# which we received from the user-data api from the Oracle Public Cloud. The info
# in the user-data is what you have to provide in the orchestration JSON file
# when provisioning a new instance on the Compute Cloud Service.
 slackUrl="https://hooks.slack.com/services/$slackCode"

# Generate the JSON payload which will be sent to the slack webhook. This will
# contain the message we will post to the slack channel.
read -d '' payLoad << EOF
{
        "channel": "#$channelName",
        "username": "Compute Cloud Service",
        "icon_url": "https:\/\/github.com\/louwersj\/Oracle-Linux-Scripting\/raw\/master\/oracle_cloud\/compute_cloud\/postToSlack\/compute_cloud_icon.png",
        "attachments": [
            {
                "fallback": "$msgTitle",
                "color": "good",
                "title": "Instance $(hostname) is created",
                "fields": [{
                    "value": "$msgBody",
                    "short": false
                }]
            }
        ]
    }
EOF

# send the payload to the Slack webhook to ensure the message is posted to slack.
statusCode=$(curl \
        --write-out "%{http_code}" \
        --silent \
        --output /dev/null \
        -X POST \
        -H 'Content-type: application/json' \
        --data "${payLoad}" "${slackUrl}")

echo ${statusCode}

In conclusion
The above example showcases another point of integration between the Oracle Cloud and Slack. As DevOps teams increasingly adopt interactive ways to communicate with each other, it is good practice to support them in this. Most likely your DevOps teams are already using WhatsApp, Slack or other tools to communicate.

Helping them, and giving them the option to also include automated messaging, will support the overall goal, make them more productive and make life more fun. 

Sunday, February 19, 2017

Oracle Linux - Download code from github with the command line

Developers use Github and Git more and more and it is becoming one of the standard ways to store code in a repository. Where developers have the need to interact with the code, write new code and refactor code, other people have "just" the need to download the code and use it as part of, for example, a deployment on a server. Downloading code from Github works in exactly the same way as downloading code from the Git repository you might have as a company.

When we have the need to get code from a Github repository on Oracle Linux we can use the git command. The git command is not installed by default on your Oracle Linux instance when you do a basic installation; however, it is available in the standard Oracle Linux YUM repository, so you can install it using the yum command.

The below command will ensure that git installs without prompting you to confirm that you really want to install git:

yum -y install git
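
To quickly verify that the installation succeeded you can, for example, check the version that has been installed; the exact version reported will depend on your Oracle Linux release.

git --version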

Now that we have ensured git is installed we can use it to download code from github. For example, we want to have the Open Oracle Public Cloud API library which is hosted on github. In case you just want to download the master branch from a project hosted at github you have to check the main URL of the project; in our example case this URL is https://github.com/louwersj/OPC_API_LIB which means we can download (clone) the repository by making use of the URL https://github.com/louwersj/OPC_API_LIB.git which is only the addition of .git to the URL. The below example shows the effect of a full git clone command:

[root@a0d544 test]# git clone https://github.com/louwersj/OPC_API_LIB.git
Initialized empty Git repository in /tmp/test/OPC_API_LIB/.git/
remote: Counting objects: 68, done.
remote: Compressing objects: 100% (52/52), done.
remote: Total 68 (delta 30), reused 0 (delta 0), pack-reused 11
Unpacking objects: 100% (68/68), done.
[root@a0d544 test]# 

The above example shows how to download (clone) a repository master branch, which should contain the latest stable release (in most cases). However, the concept of branches is used within Git and Github, which provides the option to have "working" versions (branches) of a project which might differ from the stable (master) branch.

The below image shows a simple example where the master branch is forked and a "feature" branch is created which is developed upon while keeping the master branch clean.

There are a lot of good reasons why one would like to clone a specific branch and not the master branch. For example, people might want to work with an unreleased version of a project, or it might be part of your (automated) testing where you need a specific branch.

In this case the git clone command is somewhat different from the example shown above. For example, if we had a feature branch in the Open Oracle Public Cloud API library (which it does not have) the command would look like the one shown below:

git clone -b feature --single-branch https://github.com/louwersj/OPC_API_LIB.git

This will ensure that you get the feature branch of the Open Oracle Public Cloud API library and not the master branch, which is the default branch that will be downloaded when invoking the git clone command. 
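
In case you want to verify which branch you received after such a single-branch clone you can list the local branches; for the hypothetical feature branch above this should show feature as the active branch.

cd OPC_API_LIB
git branch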

Friday, February 17, 2017

Oracle Cloud - Integrate Oracle Developer Cloud Service with Slack

Slack is a cloud-based team collaboration tool founded by Stewart Butterfield. Slack is growing in popularity with development, maintenance and DevOps teams due to the fact that it is easy to integrate with all kinds of tooling via the simple webHook methods provided by the Slack team. This gives the power to develop simple applications that will send messages to a Slack channel where humans are also discussing daily business.

As an example, Oracle provides a standard integration from the Oracle Developer Cloud with the Slack webHooks. This provides the option to push messages to the Slack channel of your DevOps team which contain information about, for example, builds, deployments, Git pushes, Merge Requests and more.

Having the option to integrate the Oracle Developer Cloud Service with Slack provides a great opportunity to engage your DevOps team in an always-on manner. As members will be able to see on the Slack website and in the Slack app on their mobile phones what is happening, and directly discuss it with each other, it makes life much easier and work much more interactive.

Create a slack webhook
To be able to make use of the integration functionality in the Oracle Developer Cloud Service towards Slack you will have to create a webhook in Slack. For this you will have to go to the Slack website and under "Channel Settings" select "Add an app or integration".


This will bring you to the Slack app store where you can search for "Incoming WebHooks". Incoming WebHooks are a simple way to post messages from external sources into Slack. They make use of normal HTTP requests with a JSON payload, which includes the message and a few other optional details.

Selecting this will create a webHook and allow you to set up and configure it. The most important part of the setup is the webHook URL, which you will need in the Oracle Developer Cloud Service to set up the integration with Slack. A large number of other settings can be configured in the webHook configuration on the Slack site.


In effect this is all that needs to be done to create the Slack webhook.

Configure Slack in the Oracle Cloud
The next step of the integration is going to your project page in the Oracle Developer Cloud Service, navigating to the Administrator section and selecting Webhooks. Here you will have the option to create new webhooks (from the Oracle side). When creating a new webhook you will have the option to select Slack as a type, which will show you the below set of options.


As you can see you can subscribe to a number of things. For this example we are interested only in a number of specific events. To be precise, we want to see a message on our Slack channel for all Git push events and all Merge Requests on git in the Oracle Developer Cloud Service.


In effect this is all the configuration that is needed to ensure you have integration between the Oracle Developer Cloud Service and Slack, so that your DevOps team members can use Slack as an additional information and discussion channel.

See the result in slack
As soon as we have configured the above we can verify the integration by sending a test message from the Oracle Developer Cloud Service to Slack.

The real test comes when we start to push new code to the Git repository. As you can see in the below image, we are now receiving the required information in the Slack channel for the entire DevOps team to ensure everyone is aware of new Git pushes and Merge Requests.


In conclusion
Ensuring your DevOps teams are able to use all the tools they need to do their day-to-day job is important. It is also important to remember that this day-to-day job nowadays happens always, everywhere and on any device. This means that your team members want to be kept up to date on what is happening on the systems and discuss this directly with each other.

Most likely they are already using Slack or a Slack-like communication channel on their mobile phones. Most likely they already have a Slack channel, a WhatsApp group or they communicate on Facebook Messenger. Supporting your organisation in this and providing them even more integration in a controlled manner adds to the overall team binding and the productivity.... and the fun.