Sunday, February 19, 2017

Oracle Linux - Download code from github with the command line

Developers use GitHub and Git more and more, and it is becoming one of the standard ways to store code in a repository. Where developers need to interact with the code, write new code and refactor code, other people "just" need to download the code and use it, for example, as part of a deployment on a server. Downloading code from GitHub works in exactly the same way as downloading code from a Git repository you might run as a company.

When we need to get code from a GitHub repository onto Oracle Linux we can use the git command. The git command is not installed by default on your Oracle Linux instance when you do a basic installation, however it is available in the standard Oracle Linux YUM repository so you can install it using the yum command.

The below command will ensure that git is installed without prompting you to confirm that you really want to install it:

yum -y install git

Now that we have ensured git is installed we can use it to download code from GitHub. For example, we want to have the Open Oracle Public Cloud API library which is hosted on GitHub. In case you just want to download the master branch from the project you have to check the main URL of the project; in our example case this URL is https://github.com/louwersj/OPC_API_LIB, which means we can download (clone) the repository by making use of the URL https://github.com/louwersj/OPC_API_LIB.git, which is nothing more than the addition of .git to the URL. The below example shows the effect of a full git clone command:

[root@a0d544 test]# git clone https://github.com/louwersj/OPC_API_LIB.git
Initialized empty Git repository in /tmp/test/OPC_API_LIB/.git/
remote: Counting objects: 68, done.
remote: Compressing objects: 100% (52/52), done.
remote: Total 68 (delta 30), reused 0 (delta 0), pack-reused 11
Unpacking objects: 100% (68/68), done.
[root@a0d544 test]# 

The above example shows how to download (clone) the master of a repository, which in most cases should contain the latest stable release. However, within Git and GitHub the concept of branches is used, which provides the option to have "working versions" (branches) of a project that might differ from the stable (master) branch.

The below image shows a simple example where the master branch is forked and a "feature" branch is created which is developed upon while keeping the master branch clean.

There are a lot of good reasons why one would like to clone a specific branch and not the master branch. For example, people might want to work with an unreleased version of a project, or it might be part of your (automated) testing where you need a specific branch.

In this case the git clone command is somewhat different from the example shown above. For example, if the Open Oracle Public Cloud API library had a feature branch (which it has not) the command would look like the one shown below;

git clone -b feature --single-branch https://github.com/louwersj/OPC_API_LIB.git

This will ensure that you get the feature branch of the Open Oracle Public Cloud API library and not the master branch, which is the default branch downloaded when invoking the git clone command.
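If you want to verify which branch ended up in your working copy after such a single-branch clone, a quick check from inside the cloned directory could look like the sketch below, reusing the hypothetical feature branch;

cd OPC_API_LIB
# show the branch the working copy is currently on
git branch
# show all branches known to this (single-branch) clone
git branch -a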

Friday, February 17, 2017

Oracle Cloud - Integrate Oracle Developer Cloud Service with Slack

Slack is a cloud-based team collaboration tool founded by Stewart Butterfield. Slack is growing in popularity with development, maintenance and DevOps teams because it is easy to integrate with all kinds of tooling via the simple webhook mechanism provided by the Slack team. This gives the power to develop simple applications that send messages to a Slack channel where humans are also discussing daily business.

As an example, Oracle provides a standard integration from the Oracle Developer Cloud into the Slack webhooks. This provides the option to push messages to the Slack channel of your DevOps team containing information about, for example, builds, deployments, Git pushes, Merge Requests and more.

Having the option to integrate the Oracle Developer Cloud Service with Slack provides a great opportunity to engage your DevOps team in an always-on manner. As members will be able to see on the Slack website and in the Slack app on their mobile phones what is happening, and discuss it directly with each other, it makes life much easier and work much more interactive.

Create a slack webhook
To be able to make use of the integration functionality in the Oracle Developer Cloud Service towards Slack you will have to create a webhook in Slack. For this you will have to go to the Slack website and under "Channel Settings" select "Add an app or integration".


This will bring you to the Slack App Directory where you can search for "Incoming WebHooks". Incoming Webhooks are a simple way to post messages from external sources into Slack. They make use of normal HTTP requests with a JSON payload, which includes the message and a few other optional details.
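As a side note, the mechanism behind an incoming webhook is nothing more than an HTTP POST with a JSON body. A minimal sketch of such a call, assuming a hypothetical webhook URL, could look like the following;

# hypothetical webhook URL; replace it with the URL Slack generated for you
WEBHOOK_URL="https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXXXXXXXXXXXXXX"

# post a simple JSON payload with the message text to the channel
curl -X POST -H 'Content-type: application/json' \
     --data '{"text":"Build 42 finished successfully"}' \
     "$WEBHOOK_URL"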

Selecting this will create a Webhook and allow you to set up and configure it. The most important part of the setup is the Webhook URL, which you will need in the Oracle Developer Cloud Service to set up the integration with Slack. A large number of other settings can be configured for the Webhook on the Slack site.


In effect this is all that needs to be done to create the Slack webhook.

Configure Slack in the Oracle Cloud
The next step of the integration is to go to your project page in the Oracle Developer Cloud Service, navigate to the Administrator section and select Webhooks. Here you have the option to create new webhooks (from the Oracle side). When creating a new Webhook you have the option to select Slack as a type, which will show you the below set of options.


As you can see you can subscribe to a number of things. For this example we are interested only in a number of specific events. To be precise, we want to see a message on our Slack channel for all Git push events and all Merge Requests on git in the Oracle Developer Cloud Service.


In effect this is all the configuration that is needed to ensure you have integration between the Oracle Developer Cloud Service and Slack, so that your DevOps team members can use Slack as an additional information and discussion channel.

See the result in slack
As soon as we have configured the above we can verify the integration by sending a test message from the Oracle Developer Cloud Service to Slack.

The real test comes when we start to push new code to the Git repository. As you can see in the below image, we are now receiving the required information in the Slack channel for the entire DevOps team to ensure everyone is aware of new Git pushes and Merge Requests.


In conclusion
Ensuring your DevOps teams are able to use all the tools they need for the day-to-day job is important. It is also important to remember that this day-to-day job nowadays happens always, everywhere and on any device. This means that your team members want to be kept up to date on what is happening on the systems and discuss this directly with each other.

Most likely they are already using Slack or a Slack-like communication channel on their mobile phones. Most likely they already have a Slack channel, a WhatsApp group or they communicate on Facebook Messenger. Supporting your organisation in this and providing them even more integration in a controlled manner adds to the overall team binding and the productivity.... and the fun.

Oracle Cloud - Microsoft Visual Studio Code & Oracle Developer Cloud Service

Oracle Developer Cloud Service is a SaaS based solution for developers to make use of a fully integrated development engine in the Oracle Cloud. The Oracle Developer Cloud Service ties into a multitude of other cloud services in the Oracle Cloud. One of the central pieces for most developers is a source repository; within the Oracle Developer Cloud Service this is Git (for all good reasons).

Even though Microsoft might not be the first vendor that comes to mind when talking about the Oracle Developer Cloud Service, it actually has great integration with Git and, by extension, with the Oracle Developer Cloud Service. If we take Microsoft Visual Studio Code as an example used by developers, we can connect it directly to the Git repository within the Oracle Developer Cloud Service.

When using Microsoft Visual Studio Code for the first time in combination with the Oracle Developer Cloud Service you most likely need to install the Git client. As you can see in the below screenshot, you will get a message stating this and directing you to the git-scm.com website where you can download the required client software to be able to interact with a Git repository.


Installing the Git client
As you encounter this message the first thing you will have to do is install the Git client before you can continue. After you have downloaded the Windows Git client installer it will take you through some standard steps of installing Windows software.

Step 1
Agree with the license agreement

Step 2
Select a location to install the Git client

Step 3
Select the options you want the installer to install and the configuration you want it to make to your system

Step 4
Select a name for the shortcut to be used after the installation

Step 5
Select the way you want to use Git from the command line. As I am using Linux for the most part when developing and Windows only on occasion, I am OK with the Windows command prompt only; however, this is a personal preference.

Step 6
When connecting to Git via SSH you will need an SSH client. Git comes with an SSH client; however, to ensure integration with other SSH tools and Tortoise, which you might already have running on your Windows system, you can choose not to use the SSH client that is shipped with Git and select another one that is already more integrated in your day to day work.

Step 7
Everyone who has been working cross platform between Windows and UNIX / Linux systems knows why the below question is asked. To ensure you do not have to go through a hell of dos2unix and unix2dos commands to make what you just developed usable, you have to select the right option for your situation in this step.

Step 8
Select your terminal emulator. I have a preference for MinTTY. In case you are developing code that works perfectly fine with the Windows console you can also select the second option; however, I would advise in most cases to use the first option.

Step 9
Configure the extra options as shown below. This is valid for most situations and should provide you the best result.


Step 10
In case you feel experimental you can select this option. It will give you a built-in difftool to find diffs between versions. However, as the screen states, it is not that well tested.

Step 11
Click the install button and wait for a short period of time.

Checking Git installation in Microsoft Visual Studio Code
After you have ensured that the Git client is installed you have to restart Microsoft Visual Studio Code and open a folder (file -> Open Folder). You will be able to see that you opened a folder as you now have the “explorer” section opened for you and it shows the folder you selected.


If we now click the Git button we notice that we get a different result and we have the option to initialize a Git repository, as shown below;


Connecting to Oracle Developer Cloud
The initial connection to a project hosted on the Oracle Developer Cloud Service is however more easily made via the Windows file explorer. If we go to the location where we want to have our project code from the Oracle Developer Cloud we can use a right mouse click to open the context menu, where we will notice a "Git GUI Here" option which, when clicked, will result in a menu with 3 options (or more if you have already created some projects). The options presented are:

  • Create New Repository
  • Clone Existing Repository
  • Open Existing Repository


In effect we always start a project on the Oracle Developer Cloud Service, so we do not want to create a new repository; we want to clone an existing repository and start working on it. When you select "Clone Existing Repository" you will be presented with the below screen.


In this screen we have to enter the "source location" and the "target directory". The target directory will be the directory on your local workstation you want to use to host a local working copy of the repository. The source location should be the HTTP location of the Git repository in your Oracle Developer Cloud Service project. You can find the URL you need when you navigate to the code section in the Oracle Developer Cloud Service, as shown below, and copy and paste it into the "source location" field of the Git GUI on your local workstation.


As soon as you click the clone button the process of cloning (downloading) the project to your local machine will start. In case this is the first time you will have to authenticate yourself using the username and password you use to access the Oracle Developer Cloud.

If you look into the folder that is created you will notice that all code (in our case only the README.md file) is now present locally.
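As a side note, the same clone can also be made from a plain command prompt with the git client itself. The sketch below assumes a hypothetical repository URL; in reality you copy the URL from the code section of your Oracle Developer Cloud Service project as described above.

# REPO_URL is a hypothetical placeholder for the HTTP clone URL of your project
REPO_URL="https://developer.example.oraclecloud.com/myorg/s/myproject/scm/myproject.git"
git clone "$REPO_URL" myproject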

Using it in Microsoft Visual Studio Code
Now that we have established a local copy of the Git repository from the Oracle Developer Cloud Service on our local workstation, we can also start using it in Microsoft Visual Studio Code. If we open Microsoft Visual Studio Code and open the folder that was created in the previous step, we will see that we do not have to create a repository; the information from the Git client that this folder is under Git is already picked up.


Now, if we add a file to the directory by creating a new file in Microsoft Visual Studio Code, you will notice that this is detected and shown as a blue 1 icon on the Git button, indicating we have one change that is not committed to Git.


If we go to the Git screen we can add a commit message and "save" this. However, it is good to remember that this only commits the change locally; it does not push the change to the Oracle Developer Cloud Service Git repository.


If we want to ensure that the file is also actually pushed to the Git repository on the Oracle Developer Cloud Service, we have to use the "Push to" option, which will make sure your change is sent to the Oracle Developer Cloud Service.
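For reference, the same flow from the command line would be a combination of staging, committing locally and pushing to the remote. The sketch below assumes a hypothetical file name newfile.txt and the default master branch;

# stage the new file (newfile.txt is a hypothetical file name)
git add newfile.txt
# commit the change to the local repository only
git commit -m "add newfile"
# push the commit to the Git repository on the Oracle Developer Cloud Service
git push origin master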

In conclusion
In effect every developer tool, including Microsoft Visual Studio Code, has the ability to work with the Oracle Developer Cloud Service. And in cases where your developer tool does not natively support Git, you can always use the Git GUI to ensure you have this integration.

Thursday, February 16, 2017

Oracle Linux – forked logging to prevent script execution delay in high speed environments

Logging has always been important when developing code, and with the changing way companies operate their IT footprint and move to a more DevOps oriented way of working, logging becomes even more important. In a more DevOps oriented IT footprint logging is used to constantly monitor the behavior of the running services in the company and feed the information back to the DevOps team, both to take action on issues and to continuously improve the code.

As DevOps teams tend to consume much more logging than traditionally oriented IT organizations, the direct effect is that a lot more logging routines are developed and included in the code. While traditionally the number of lines written to log files is relatively limited, the number of lines written to log files in a DevOps footprint can grow exponentially.

Even though this might not look like a challenge at first, it can cause an issue in environments with a lot of services deployed in combination with a high level of logging "steps" and a high rate of execution. When developing code that writes to a log file you have to realize that every line of logging you send to the logfile actually costs CPU cycles, memory interaction and I/O operations to the file system, as well as potentially waiting for a lock on the file to be able to write to it. All of this results in a delay during the execution of your code.

Traditional logging implementation
In "traditional" code, and especially in scripting such as bash scripting, you will see a lot of implementations like the one shown below. Between start and finish we have two "real" functions that will do something; after the execution of each function the script writes some logging to a file. At first glance this is a perfectly working solution; however, if you execute the script hundreds of times per minute the end-to-end execution of the script might slow down due to the fact that the log writing is done in an inline manner.

Forked logging 
When developing code, even if you develop a small script in bash, it is good practice to ensure that your main code flow does not have to wait for the lines of logging to be written to a file on the file system. In the example below you will see an implementation where the main flow of your code will fork a secondary process which takes care of writing the logging to the logfile.


By developing your code this way, the two functions that represent an actual step execute as one process on the operating system in a sequential manner. Every time there is a need to write logging to the logfile a new process is forked. The benefit of this is that your main flow of code will not wait until the "write to logfile" process is completed; it will directly go to the next step in the script.

In cases where you have congestion in writing to the file, this implementation will ensure your main process does not have any delay in execution. The forked process that takes care of writing to the file will experience the delay, however the main execution time will improve.

Code example of forked bash processes
Take the below code as the starting point of the example. We have three main steps in the code, and all three basically do the same. In a real world situation step 1 and step 3 would be "real" code execution while step 2 is writing to the log. The sleep commands are included to simulate delay on the system.

#!/bin/bash

sleep 1
echo "step 1 : $(date)" >> ./result.txt

sleep 5
echo "step 2 : $(date)" >> ./result.txt

sleep 1
echo "step 3 : $(date)" >> ./result.txt

As you can see step 1 and step 3 take 1 second to complete while step 2 (writing to the log file) takes 5 seconds. If we run the script and read the content of ./result.txt we will see the following:

[opc@a0d544 test]$
[opc@a0d544 test]$ ./0_example.sh
[opc@a0d544 test]$ cat result.txt
step 1 : Thu Feb 16 17:19:49 EST 2017
step 2 : Thu Feb 16 17:19:54 EST 2017
step 3 : Thu Feb 16 17:19:55 EST 2017 
[opc@a0d544 test]$

The above is perfectly explainable and as expected. However, we want the code to run faster, or in other words, we want the main sequential flow to finish faster while we do not really worry that the logging lags a bit behind. In case the logging is experiencing I/O performance issues you do not want to wait for this. Also, if your logging is done by using a curl command to a central REST based logging server you do not want to wait for that.

In the below example we show a form of forking in bash, there are other ways of doing it, however this works for example purposes. The code is shown below:

#!/bin/bash

function step2 () {
 sleep 5
 echo "step 2 : $(date)" >> ./result.txt

}

sleep 1
echo "step 1 : $(date)" >> ./result.txt

( step2 ) &

sleep 1
echo "step 3 : $(date)" >> ./result.txt

As you can see, we have placed step two in a function and we call the function in a bit of a different manner than we would normally do. This has the result that the function is executed in a sub-shell and the main sequential code flow will continue without waiting for the sub-shell to finish the execution of the function.

If we execute this and look at the results we see a somewhat different result than in the first test. We see that the first line is for step one at :08 and the second line is for step three at :09. This is also the moment the script finished. However, at :13 we have step two reporting to the file, while the initial script already finished.

[opc@a0d544 test]$ ./1_example.sh
[opc@a0d544 test]$ cat result.txt
step 1 : Thu Feb 16 17:20:08 EST 2017
step 3 : Thu Feb 16 17:20:09 EST 2017
step 2 : Thu Feb 16 17:20:13 EST 2017
[opc@a0d544 test]$

By implementing the calling of step two in a different manner we ensured that the execution time of the main sequential flow improved by 5 seconds. Even though you might not expect this kind of delay in practice, there are real improvements to be made to script execution speed when adopting solutions like this.

Some considerations
In cases where time stamping of your logs is extremely critical and/or in cases where you expect congestion in writing to files, it is good practice to ensure that the timestamp is taken at the moment the process is forked and not at the moment the line is written to the file. More precisely, the main flow should take the timestamp and hand this over to the forked process in combination with the information that needs to be written to the file.

This way you are sure that the timestamp of the actual occurrence is written to the log and not the timestamp of the moment the information was written to the logfile.
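A minimal sketch of this idea is shown below; it assumes a helper function log_line which receives the timestamp taken in the main flow together with the message, so the logged time reflects the moment of the event and not the moment of the write.

#!/bin/bash

function log_line () {
 local stamp="$1"
 local message="$2"
 (
  # simulate slow I/O or congestion on the log file
  sleep 5
  echo "${message} : ${stamp}" >> ./result.txt
 ) &
}

sleep 1
log_line "$(date)" "step 1"

sleep 1
log_line "$(date)" "step 3"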

Wednesday, February 15, 2017

Oracle Linux – Working with Memory Mapped Files

When working with Oracle Linux and developing your own solutions which make more direct use of the underlying operating system you will, at one point in time, encounter the need to have multiple processes interact with the same file. As an example, you might have a process that writes actions to an action-queue file while another process is reading this file and updates the file when the action is completed.

When you encounter such a situation you can work around it with fseek() and code it that way, however there are more elegant ways of doing it. You could map the file to memory and use a pointer to the memory map to interact with it.

Using mmap()
To do so you can use the mmap() system call to map a file to memory. mmap() creates a new mapping in the virtual address space of the calling process. The mmap() system call can be called as shown below:

void *mmap(void *addr, size_t len, int prot, int flags, int fildes, off_t off);

addr : This is the address we want the file mapped into. The best way to use this is to set it to (caddr_t)0 and let the OS choose it for you. If you tell it to use an address the OS doesn't like (for instance, if it's not a multiple of the virtual memory page size), it'll give you an error.

len : This parameter is the length of the data we want to map into memory. This can be any length you want. (Aside: if len is not a multiple of the virtual memory page size, you will get a block that is rounded up to that size. The extra bytes will be 0, and any changes you make to them will not modify the file.)

prot : The "protection" argument allows you to specify what kind of access this process has to the memory mapped region. This can be a bitwise-ORd mixture of the following values: PROT_READ, PROT_WRITE, and PROT_EXEC, for read, write, and execute permissions, respectively. The value specified here must be equivalent to the mode specified in the open() system call that is used to get the file descriptor.

flags : These are miscellaneous flags that can be set for the system call. You'll want to set it to MAP_SHARED if you're planning to share your changes to the file with other processes, or MAP_PRIVATE otherwise. If you set it to the latter, your process will get a copy of the mapped region, so any changes you make to it will not be reflected in the original file; thus, other processes will not be able to see them. We won't talk about MAP_PRIVATE here at all, since it doesn't have much to do with IPC.

fildes : This is where you put that file descriptor you opened earlier.

off : This is the offset in the file that you want to start mapping from. A restriction: this must be a multiple of the virtual memory page size. This page size can be obtained with a call to getpagesize().

Example code
As an example you can review the below code. Here you see we need to include sys/mman.h explicitly (and fcntl.h for the open() call). In the example we use the file "somefile".

#include <unistd.h>
#include <sys/types.h>
#include <sys/mman.h>
#include <fcntl.h>      /* needed for open() and O_RDONLY */

int main(void)
{
    int filedesc, pagesize;
    char *data;

    /* open the file read-only, then map one page of it, starting at
       offset pagesize, into the address space of this process */
    filedesc = open("somefile", O_RDONLY);
    pagesize = getpagesize();
    data = mmap((caddr_t)0, pagesize, PROT_READ, MAP_SHARED, filedesc, pagesize);

    return 0;
}

By making use of such an approach you will have much more control over files and much more ease of development in cases where you need to interact with a file from multiple processes at the same time.

For users who use Oracle Linux purely as the operating system and do not develop custom code, or only code at a higher level, this might not directly be of interest. However, for everyone who is building custom code on Oracle Linux that needs to interact more directly with the operating system, using this approach can be very beneficial in some cases.

Oracle Cloud - Capgemini Experience Test Drive

In collaboration with Oracle a team of Capgemini and Oracle cloud leaders have been hosting customers and Capgemini employees to enjoy a full evening of experiencing the Oracle Cloud.

Sessions on subjects such as Mobile & Chatbots, Oracle Process Cloud, Oracle Integration Cloud and Oracle Compute Cloud have provided all attendees with a large set of experience on the Oracle Cloud. The YouTube video below gives a short impression of the event;


The hands-on experience workshops have been provided by the following people;

Monday, February 13, 2017

Oracle Cloud - introducing the Open Oracle Public Cloud API library

Today we are introducing the Open Oracle Public Cloud API library. The Open Oracle Public Cloud API library, or OPC_API_LIB, is an open source library of functions written to make it easier for developers to code against the Oracle Public Cloud APIs and ensure integration. The open source project is licensed under the GNU General Public License and is available free for everyone. The project is not an Oracle project, it is a community project.

Developers, DevOps teams, continuous delivery teams, system integrators and everyone with the need to interact with the Oracle Public Cloud in a programmatic manner can make use of this API library.

The main intent is to take away the burden of fully diving into the details of the inner workings of the Oracle Public Cloud APIs and to provide an abstraction layer in the form of a library which can be used to code against.

The current release is an extremely small subset of functions and the intention is to grow the number of functions in the library over the coming time. Bash has currently been selected as the primary language for the library.

All the main code for the Open Oracle Public Cloud API library will be available on github.

Friday, February 10, 2017

Oracle Linux - install Jenkins on Oracle Linux

Jenkins is becoming more and more the tool of choice in most continuous integration and DevOps environments.

Jenkins is an open source automation server written in Java. Jenkins helps to automate the non-human part of the software development process, with now common things like continuous integration, but also by further empowering teams to implement the technical part of Continuous Delivery.

It is a server-based system running in a servlet container such as Apache Tomcat. It supports SCM tools including AccuRev, CVS, Subversion, Git, Mercurial, Perforce, Clearcase and RTC, and can execute Apache Ant, Apache Maven and sbt based projects as well as arbitrary shell scripts and Windows batch commands. The creator of Jenkins is Kohsuke Kawaguchi. Released under the MIT License, Jenkins is free software.

Installing Jenkins on Oracle Linux is relatively easy and only involves a small number of steps, as outlined below;

The first step is to ensure you have the Jenkins YUM repository available on your Oracle Linux instance so you can do the installation. This includes the following 3 steps:

sudo wget -O /etc/yum.repos.d/jenkins.repo http://pkg.jenkins-ci.org/redhat/jenkins.repo
sudo rpm --import https://jenkins-ci.org/redhat/jenkins-ci.org.key
sudo yum install jenkins

As soon as you have completed those steps, also make sure a Java runtime is installed on your Oracle Linux instance, as Jenkins requires Java to run. This can again be done with a simple yum install:

yum install java

When the installation is done you will have to ensure that Jenkins is started and that it will start every time you reboot your system. The following two commands will make sure that Jenkins is started and that it is included in the startup routine of your Oracle Linux instance

service jenkins start
chkconfig jenkins on

If all completed without any issues you should now have a running Jenkins server on your Oracle Linux instance. This means that you should be able to access the server with a browser on port 8080, that is, if your local firewall (if you have one installed) allows this.
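In case a local firewall is active on the instance, opening port 8080 could, as a small sketch assuming firewalld is used, look like the following;

# open port 8080 for Jenkins in the default zone and reload the firewall rules
firewall-cmd --permanent --add-port=8080/tcp
firewall-cmd --reload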

If you have ensured you can access Jenkins on port 8080 you will see the below screen the first time you access it.


This means that your Jenkins server is running and you have to follow the instructions on the screen to unlock Jenkins. This ensures that you are the only one who can perform the initial setup steps.
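The unlock screen points you to the file that contains the generated administrator password. On a default installation this usually comes down to something along the lines of the command below;

# print the generated initial admin password (path as shown on the unlock screen)
cat /var/lib/jenkins/secrets/initialAdminPassword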

You should now be ready to start enjoying Jenkins on Oracle Linux.

Monday, February 06, 2017

Oracle Cloud - Adding swap space to your Oracle Linux compute instance

Whenever you deploy an Oracle Linux instance on the Oracle Compute Cloud at this moment, you will notice that the deployment is bare minimal. In essence I do agree with the line of thinking that things you do not explicitly need do not belong on your system. Everything you need for a specific reason you are free to add at that moment in time, while keeping the template as small as possible.

The same applies to the fact that systems should be sized for what they really need and one should not oversize them. For this reason I am a personal fan of just enough operating system (JEOS) kind of deployments and just enough hardware resources.

One of the downsides is that if you use this line of thinking and use a bare minimal operating system deployment with a limited set of compute resources, you sometimes run into the issue that you miss things you actually would like to have. One of the things that you might run into first when applying this line of reasoning on the Oracle Public Cloud is swap space.

No swap space
When deploying a templated bare minimal system in the Oracle Public Cloud using the Compute Cloud Service you will notice that you do not have swap space. Depending on the goal you have for a specific instance this might be an issue or you might not even notice. Some applications are perfectly fine not having swap space while some even demand it during installation.

By default you will not have swap space. This means you will have to add swap space at run time, or you have to make sure that your automated deployment will take care of adding swap space for those instances where it is required.

Give me swap space
In cases where you need swap space you can simply add it. In effect there are two main ways of adding swap space: you can use a swap file you create with the dd command, or you can add an entire disk to your machine and use that for swap space. It used to be the case that adding additional disks to your machine was a task that was not as simple as only executing commands and would involve actual hardware.
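For completeness, the swap file route mentioned above could look like the minimal sketch below, assuming a 1 GB swap file at the hypothetical location /swapfile;

# create a 1 GB file, restrict access to it, format it as swap and enable it
dd if=/dev/zero of=/swapfile bs=1M count=1024
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile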

In the era of virtualization and cloud, and in all reality since we started using SAN solutions for storage, adding more diskspace to a machine is not that hard anymore. Claiming and adding more disk space in the cloud era is simply requesting more space from your cloud provider.

Creating a disk in the Oracle Cloud
When we decide to use a disk to use for swap space the first thing we need to do is to ensure we have a disk to add. To create a disk we navigate to the storage tab in the compute cloud service console. Here we can create a new disk as a storage volume, an example of this is shown below


After the disk is created you can attach the disk to an instance in the Oracle Compute Cloud Service. This will result in a screen like the one below. You have to select the instance name from a list of values and you have to select the number under which you add the disk to the instance.


Selecting the number under which you add the disk is important, as it determines the device name under which the new disk will be known in the Oracle Linux instance. By default the first disk will be known as device /dev/xvdb, the second device will be /dev/xvdc, the third will be /dev/xvdd, etc.

Creating the swap space
As soon as you have attached the disk to the instance it will be known as a new device on the instance. This means you will have to tell the instance how you would like to use it. You could use it for storage, which would require you to mount it as a filesystem. However, in this case we want to use it as swap space, which requires a slightly different approach than using it as a filesystem.

First we check the current amount of swap space that is available on the instance at this moment. As can be seen below, we currently do not have any swap space added.

[root@pocapp2 ~]#
[root@pocapp2 ~]# free
             total       used       free     shared    buffers     cached
Mem:       7657252    5632492    2024760          0      72088    5244236
-/+ buffers/cache:     316168    7341084
Swap:            0          0          0
[root@pocapp2 ~]#

Now we have to locate the disk we just created and added to the instance. As we selected that the disk should be added as the secondary disk, we should now be able to find a new disk as device /dev/xvdc.

[root@pocapp2 ~]#
[root@pocapp2 ~]# ls /dev/xvd*
/dev/xvdb  /dev/xvdb1  /dev/xvdb2  /dev/xvdc
[root@pocapp2 ~]#

As you can see from the above example we now have a device /dev/xvdc available on the system which we can use for swap space. Now we have to make the device a swap device by using mkswap

[root@pocapp2 ~]#
[root@pocapp2 ~]# mkswap /dev/xvdc
mkswap: /dev/xvdc: warning: don't erase bootbits sectors
        on whole disk. Use -f to force.
Setting up swapspace version 1, size = 10485756 KiB
no label, UUID=8ac3eacf-42e6-43a4-8a53-d33f29767dee
[root@pocapp2 ~]#

Now that we have ensured we can use the new disk as swap space, we have to enable swap on the system by making use of this new swap device. A simple swapon command on the device will ensure that the swap is used.

[root@pocapp2 ~]#
[root@pocapp2 ~]# swapon /dev/xvdc
[root@pocapp2 ~]#

After executing the swapon command the device should now be acting as a device that provides swap space to the system. You can verify this by again executing the free command, and you will notice that additional swap space is now active.

[root@pocapp2 ~]#
[root@pocapp2 ~]# free -m
             total       used       free     shared    buffers     cached
Mem:          7477       5508       1969          0         70       5121
-/+ buffers/cache:        316       7160
Swap:        10239          0      10239
[root@pocapp2 ~]#

Even though we now have the swap space available on the system we have not made it persistent. Meaning, next time we reboot the machine we will lose the swap space again. To ensure the swap space is persistent we have to add a line to /etc/fstab like the one shown as an example below.

/dev/xvdc               swap                    swap    defaults        0 0

Now we have ensured that our system is equipped with additional swap space, and that this is done in a persistent manner, so the swap space is available every time we reboot the machine.

Sunday, February 05, 2017

Oracle Cloud - Build secure hybrid cloud connections with Oracle Corente Gateway

When you start using the Oracle Cloud one of the things you most likely would like to understand is how you will connect users to systems deployed in the Oracle Cloud and how you might connect servers in your own datacenter or in another cloud to this. For some time the primary answer would be, using Oracle Fast Connect.

However, another solution is provided and finds its origin in this press release dating back to the beginning of 2014;

On January 7, 2014, Oracle announced that it has agreed to acquire Corente, a leading provider of software-defined networking (SDN) technology for wide area networks (WAN).

The transaction has closed.

Corente's software-defined WAN virtualization platform accelerates deployment of distributed and cloud-based applications and services by allowing customers to provision and manage global private networks connecting to any site, over any IP network, in a secure, centralized, and simple manner. Proven deployments at leading enterprises and cloud service providers have dramatically decreased time to deployment of cloud-based applications and services, and increased security and manageability across the enterprise ecosystem.

The combination of Oracle and Corente is expected to deliver software-defined networking offerings that create cost-effective, secure networks, spanning global deployments, delivering a complete technology portfolio for cloud deployments with SDN offerings that virtualize both the enterprise data center LAN and the WAN.

Oracle Cloud acquisition strategy
As it often goes with Oracle and acquisitions, for some time you do not hear about the acquired product and suddenly it starts to be included in the wider portfolio. Ever since Oracle started the journey to the cloud you see that companies are often acquired to strengthen the service portfolio of the Oracle Public Cloud in some way or form.

In some cases this is not a full new product line; it is the small additions that make the Oracle Public Cloud much more attractive and easier to use and incorporate in your enterprise deployments.

Connecting the Hybrid Cloud
The Oracle Corente Gateway provides a solution to a known problem when developing a hybrid cloud strategy. The issue revolves around the question: how do we connect the different clouds and locations? By default, cloud solutions open up to the public internet, a model which you do not want in all situations. The recent issues with compromised MongoDB servers that had been configured to be accessible from the public internet made this painfully clear once again.

The ideal model is that nothing is connected to the public internet directly unless there is a functional reason for it. Meaning, webservers providing services to users on the public internet can very well be exposed to the public internet. However, all other services running on those specific servers, and all other servers, should be shielded from people trying to access them.

Ideally a model is created where the different clouds, cloud locations and traditional datacenter locations are connected together via a secured network. This secured network can be a site-2-site VPN tunnel based network over the public internet, or it can be a secured network via the dark fiber backbone of the major network providers. The latter is, for example, a service provided by Equinix in the form of the Equinix Cloud Exchange.

Oracle Corente Services Gateway
Oracle provides an easy to use and easy to implement solution for a site-2-site VPN model in the form of the Corente service. The Corente service can be seen as a virtual VPN endpoint which you can connect to an on premise solution in your datacenter. As an example, you would be able to create a secure site-2-site VPN connection where you have Corente running in the Oracle Cloud while in your local datacenter you have a Juniper vSRX solution in place.



By binding both the cloud and your local datacenter together using a VPN site-2-site connection you can extend your datacenter into the cloud. By ensuring the correct network routing, services can be shared and administration can be done with a single network experience. This limits the need to have direct and open connections between the two sites. The level of integration and the level of security is raised by binding the two locations together.

As can be seen in the diagram above, the Corente instance is provisioned in the Oracle Cloud. For this an Oracle Compute Service Instance is used which runs Oracle Linux to ensure the software defined VPN endpoint provides the needed services. From the Corente gateway you can route network traffic to the Oracle Compute Service Instances; however, connections to other Oracle Public Cloud Services can also be established. As an example, you can use this model to also establish the connections to the Oracle Databases running in the Oracle Database Cloud Service.


Oracle Cloud - Deploying Microservice Containers

Whenever you engage in a more microservice oriented way of developing your application, it will become clear that this architecture is more suitable to be developed in a DevOps manner than the traditional way applications are developed and maintained.

One of the things that we see a lot in enterprises that start transforming from traditional IT to a more modern way of architecture, development and maintenance is the adoption of extremely fast and flexible building blocks. Those building blocks are often selected to provide the optimal support for a DevOps kind of operation and to ensure high flexibility for agile development as well as for scaling compute resources up and down.

One of the technologies we see gaining adoption in the enterprise in a rapid fashion is the use of containers in combination with microservices. Instead of provisioning "virtual machines" in a cloud environment to host a monolithic application, the trend is changing to deploying containers to host webservices.

As an example of such a deployment you can think of running Flask in a Docker container. Flask is a microframework for Python based on Werkzeug and Jinja 2, released under the BSD license, and it is ideal for developing microservices with Python. Dockerizing a simple Flask app is relatively easy and a large number of tutorials and examples are available.
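Running such a container could, as a hedged sketch assuming you already have a Dockerfile for your Flask microservice and call the image my-flask-service, be as simple as the two commands below;

# build the image from the Dockerfile in the current directory (hypothetical image name)
docker build -t my-flask-service .
# run the container in the background, exposing the Flask default port 5000
docker run -d -p 5000:5000 my-flask-service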

Deploying with Oracle Cloud
When you are building an enterprise class microservice architecture based footprint and start adopting a container based infrastructure in a DevOps fashion the Oracle Cloud is providing some ideal components to get started.

Oracle Cloud for docker based deployments

As you can see from the above high level representation Oracle provides some of the key components for building such a landscape. In this example your developers will make use of the Oracle Developer Cloud Service to develop and store code. However, this is also the foundation for an automated deployment of containers which will contain both the needed technical components such as Flask and the developed microservice.

In the above example you can see that "application consumers" have a "person" as icon. In reality the consumers of a microservice will in most cases be applications instead of real life persons. These applications can be real applications or another set of microservices.

The Oracle Developer Cloud Service provides the basic components to facilitate a fully automated continuous integration strategy. In case you desire more than what is provided out of the box you can deploy and integrate whatever you need by leveraging the Oracle Compute Cloud Service.

Integrate with other services
Even though you can in theory build everything based upon a container strategy, the question architects have to ask is: which parts do I want to develop and which parts do I want to consume? In many cases it is much more beneficial to consume a service rather than develop it. Take messaging for example: you can build a messaging service yourself, or you can make use of the Oracle Messaging Cloud Service and consume this rather than develop it.

The same is applicable for example for handling documents or storing data in a database. For this you could leverage some of the other Oracle Public Cloud Services.

In conclusion
When transforming your legacy applications or building a new solution it is advisable to look into how you can leverage more modern architecture principles such as microservices. It is also advisable to ensure you can leverage the flexibility and scalability of the cloud and adopt lightweight solutions such as containers.

In addition to the above, you should take into consideration how you can create your solution with DevOps and continuous integration in mind to ensure an agile development method which provides flexibility and speed for adopting new strategies. 

Saturday, February 04, 2017

Functional Decomposition for Microservices Architecture and Application Refactoring

When you are starting to consider building a new product, a new application or refactoring and retrofitting an existing application to make it future proof at one point in time you will most likely consider the use of microservices.

Microservices is a specialization of an implementation approach for service-oriented architectures used to build flexible, independently deployable software systems. Services in a microservice architecture are processes that communicate with each other over a network in order to fulfill a goal. These services use technology-agnostic protocols. The microservices approach is a first realization of SOA that followed the introduction of DevOps and is becoming more popular for building continuously deployed systems.

In a microservices architecture, services should have a small granularity and the protocols should be lightweight. A central microservices property that appears in multiple definitions is that services should be independently deployable. The benefit of distributing different responsibilities of the system into different smaller services is that it enhances the cohesion and decreases the coupling. This makes it easier to change and add functions and qualities to the system at any time. It also allows the architecture of an individual service to emerge through continuous refactoring, and hence reduces the need for a big up-front design and allows for releasing software early and continuously.

Knowing your functionality
Regardless of whether you are building a new application from scratch or building upon an existing application and retrofitting it for the future, you will need to understand its functionality. In traditional architecture methods it is vitally important to understand the functionality of your application; however, when moving to a Microservices Architecture it is even more important.

The primary reason why it is important to understand the application functionality and the business use of the application is that it is good practice to break an application up into Microservices based upon its functional components.

Traditionally all functionality was captured in an overall monolithic application architecture. With Microservices each functionality can be in theory a separate Microservice.

Functional decomposition
Whenever you start to architect an application which will be based upon Microservices, you will have to make a functional decomposition to break down the complexity and map it onto functional areas and functional components, and later onto services and Microservices.

Functional decomposition is the process of taking a complex process and breaking it down into its smaller, simpler parts. This might result in the below high level functional decomposition, which is represented in a flow.


In a real world application this will be a sub-decomposition of a much larger and much more complex set of processes. The main pitfall is that this is primarily done by developers, who tend to think in solutions. The best way to do a pure functional decomposition is doing it without thinking about how this should be implemented in code (or a system for that matter). This is best done when thinking about the process how it would be done with pen and paper.

After you have defined the functional steps you can break down the steps into the functions you need per step. In this model you can define which technical functions you need to be able to complete a step. As an example, if one of your steps is checking the transportation costs for sending a parcel to a customer's shipping location you might need the following technical functions:

  • Get combined weight for all products in the order
  • Determine shipping box on product sizes
  • Get shipping destination 
  • Get shipping costs based upon weight, box dimensions and destination
  • Get customer discount
  • Apply customer discount on shipping costs

This will provide you a mapping like shown below as a visual example representation. In reality you will provide the actual function description.



Mapping functional decomposition to services
When you have been able to create a decomposition of your application into both the functional steps and the functions needed to support each functional step, you will be able to start mapping the overlap of technical functions over all the functional steps in your flow. If we take the example of the steps needed to determine the costs for sending a parcel, you have the required technical function "get customer discount". This technical function will be required in multiple steps of an order intake flow.

The below representation shows the mapping of the functional decomposition and finding the double functions in the overall flow.


If you have been able to find the "double" functions you can map them to future microservices. The idea of a microservice is that you can call it from every functional step where it is needed. This means that you can have it in a central location in your microservices deployment and call it from the functional step when needed.




Sizing your microservice deployment
In essence microservices are, due to the way microservices communicate, ideal for creating a highly available and highly scalable architecture. As a rule of thumb it is considered good practice to always deploy a microservice in a highly available mode and always have a minimal deployment of two instances of a microservice in your deployment.


Based upon the number of functional steps that use a specific microservice you can take a first guess at the number of instances you need for each microservice. As stated, as a default rule every microservice needs to be deployed in at least a dual deployment fashion. If you see that certain microservices are used by multiple functional steps, it might be wise to deploy more than two instances of this specific microservice.

In conclusion
In every case, regardless of whether you will use microservices, it is a vital step in the thinking process to break down your application into functional parts. When developing a microservices based solution the functional decomposition can be used to start mapping the different microservices you will need and align them with the functional steps in your application.

Thursday, January 19, 2017

Oracle Cloud - Prevent the MongoDB hack effect in Oracle Cloud

One of the things that cloud computing in general has provided the world is the easy way of starting a project. Without too much effort people can deploy compute instances, databases and components needed to start a project. This in essence is a good thing and it fuels innovation and ensures projects can be started and completed with a lot less effort than in traditional IT environments.

The downside is that people who do not oversee the security implications are able to deploy new environments without being forced to thoroughly implement security. A recent example is the MongoDB debacle, in which random MongoDB databases have been compromised and lost, or ransom demands have been made by the criminals who downloaded the data and removed it from the server afterwards.

In this case MongoDB is in effect not to blame; most of the MongoDB servers that have been compromised were in effect not hacked. The servers had been exposed to the public internet and had not been secured with a password, or only with a default / very weak password. This in effect left the door open to attackers to gain access and download the data.

This means that it is not a software bug that caused the avalanche of hacked MongoDB servers; the way people implement solutions in the cloud without thinking about security caused this issue. Enabling people to deploy systems with the click of a button is on one side a blessing, on the other side it is causing a major risk as people can easily forget to implement the needed levels of security.

Consider the traditional security rules
When deploying systems and entire IT footprints in the cloud, in this example the Oracle Public Cloud, you still need to apply a number of the traditional IT security rules. The main reason why you need to apply them is simple: they make sense and they have been created for a reason. The technical implementation of them might differ, however the theoretical model applies both to cloud based solutions and to traditional solutions.

Network zones
In traditional IT environments the concept of multiple zones or tiers is well established. The below model, used in many cases when deploying Oracle centric IT footprints, consists of four network zones or tiers.



Un-trusted tier / zone : the un-trusted zone can hold systems that connect to "unknown" parties in an uncontrolled area. As an example, the un-trusted zone can hold systems that are connected to the public internet. The un-trusted zone cannot hold data and can only hold stateless systems. Systems in the un-trusted zone can connect (in a controlled manner) to the systems in the semi-trusted zone directly.

Semi-trusted tier / zone : the semi-trusted zone can hold systems that can connect to "unknown" parties in a controlled area. As an example, the semi-trusted zone can hold systems that connect to a customer network or a third party network. The semi-trusted zone cannot hold data and can only hold stateless systems. Systems in the semi-trusted zone can connect (in a controlled manner) to systems in the trusted zone directly.

Trusted tier / zone : the trusted zone can hold systems that connect to the semi-trusted zone and is in general for hosting databases and data-storage applications. As an example, the trusted zone can hold a database which provides support to applications in the semi-trusted zone. Systems in the trusted zone can connect (in a controlled manner) to systems in the fully trusted zone directly.

Fully trusted tier / zone : the fully trusted zone holds generic systems that are used for management, support and control. As an example, Oracle Enterprise Manager would be hosted in the fully trusted zone.

Using this model, or a model like it, provides a clear segregation from a network point of view. By implementing such a model you prevent a malicious attacker from easily gaining access to systems with a higher level of confidentiality or impact. Implementing this solution in the Oracle Public Cloud requires understanding how to manage the Oracle Public Cloud firewall configurations and implementing them in a manner that achieves the same as you would in a standard IT footprint.

Network segments
Where network zones can be seen as the vertical split of where you deploy servers in a network, network segments can be seen as the horizontal split. Commonly network segments are used to segregate production, acceptance, test and development systems into segregated stacks of network tiers / zones.

This could mean that you will have a production segment which has an Un-trusted tier, Semi-trusted, Trusted tier and a fully trusted tier. And you would have the same tiers in the acceptance segment, the test segment and the development segment.

One of the main traditional reasons for splitting networks into segments is that in a development environment people will have (and need) a lot more freedom on systems in comparison to the production segment. The production segment will be tightly controlled and monitored, while in the development environment people are allowed (and expected) to experiment and find new ways of doing things.

Local firewall rules
A highly debated subject is the local firewall and the need to have them implemented on top of network segmentation and tiering.

In cases where local firewall rules are NOT implemented you will be able to create connections on all ports towards all servers in the specific network tier and network segment the server resides in. This means that if someone gains access to a server it is relatively easy to also acquire access to servers in the same segment and tier, as no firewall is preventing a connection to the other servers.

In cases where local firewall rules are implemented the effect of a compromised server in your network is somewhat more contained. All other servers in the same network segment / tier will only allow incoming connections from the compromised server on ports they explicitly allow.

As stated this is a highly debated subject, as a large number of administrators see it as a burden to ensure the local firewall rules are implemented and maintained. However, this burden is to a certain extent removed by having a good and central mechanism to control local settings. Solutions like Puppet, in combination with other tools, can play a great role in managing local firewall rules on distributed servers.

When deploying Oracle Linux servers on the Oracle Public Cloud you will be able to use (depending on your version) iptables or firewalld as a local firewall. Both are core Linux local firewalls, are implemented and used throughout the industry, and are seen as great implementations for local firewalls.
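As a small sketch of such a local rule on a firewalld based Oracle Linux instance, assuming you only want to allow a hypothetical application server at 10.0.0.5 to reach the database listener port, this could look like the following;

# allow TCP port 1521 only from 10.0.0.5 and reload the firewall rules
firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="10.0.0.5/32" port port="1521" protocol="tcp" accept'
firewall-cmd --reload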

Painting the picture
The above examples all point to network related security; however, they are just an example to illustrate that "old fashioned" security rules and best practices should not be forgotten when moving to the cloud. Some people try to diminish the need for security and claim it will hold back the speed and agility of operations in a cloud environment.

The claim that good and proper security holds back the speed and agility in the cloud is incorrect. Ensuring that your cloud deployment processes are able to include the proper level of security will not take any speed away from using a cloud based platform.

When looking at the Oracle Cloud, one can use

  • APIs to control and implement network firewall rules and by doing so ensure that deployed servers are always, automatically, placed in the correct location from a network segment and zone/tier point of view
  • Puppet / Chef based solutions that enable you to control local firewalls in your landscape and ensure changes are distributed easily
  • Security hardened templates that enable you to deploy secured and hardened operating systems without the need to manually harden them


Common sense and architecture
However, the most important thing is to ensure you apply common sense and ensure you do your architecture correctly, which should always include a healthy portion of security considerations as part of its foundation.

The majority of the hacked MongoDB servers could have been secured with just a little more thought and common sense. Ensuring you do not forget the rules created in the pre-cloud era, and thinking about how you can apply them in the cloud before hitting the default "deploy button", can mean the difference between losing your data and being able to sleep well at night without having to worry.