Wednesday, October 18, 2017

Oracle Linux - Check your kernel modules

Knowing and understanding what is running on your Oracle Linux system is vital for proper maintenance and tuning. As operating systems are increasingly seen as something that is just there and should not be a hindrance for development, and as we see the rise of container based solutions and serverless computing, it might look like the operating system becomes less and less important. However, the opposite is true: the operating system becomes more and more important, as it needs to facilitate all the requirements of the containers and functions running on top of it without human intervention, or at least with as little human intervention as possible.

This means that if you operate a large deployment of servers, and you have to ensure everything is automated and performing at its best at any moment in time without having to touch the systems, or at least as little as possible, you need to optimize and automate. To be able to do so you need to understand every component and be able to check whether you need it or can drop it. Whatever you do not need, drop it: it can be a security risk, or it can consume resources without there being a need for it.

Oracle Linux Kernel modules
Kernel modules are an important part of the Oracle Linux operating system, and understanding them and being able to check what is loaded and what is not is something you need to master. Kernel modules are pieces of code that can be loaded into and unloaded from the kernel on demand. They extend the functionality of the kernel without the need to reboot the system.

udev
Today, all necessary module loading is handled automatically by udev, so if you do not need to use any out-of-tree kernel modules, there is no need to put modules that should be loaded at boot in any configuration file. However, there are cases where you might want to load an extra module during the boot process, or blacklist another one for your computer to function properly.

Kernel modules can be explicitly loaded during boot and are configured as a static list in files under /etc/modules-load.d/. Each configuration file is named in the style of /etc/modules-load.d/<name>.conf. Configuration files simply contain a list of kernel module names to load, separated by newlines. Empty lines and lines whose first non-whitespace character is # or ; are ignored.
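
As a minimal sketch, a configuration file such as /etc/modules-load.d/virtio-net.conf (the file name and module are just an example) could look like this:

# load virtio-net at boot
virtio-net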

lsmod
Checking which kernel modules are loaded in the kernel can be done using the lsmod command. lsmod will list all the modules; basically it is a representation of everything you will find in the /proc/modules file, presented in a somewhat more readable way. An example of the lsmod command on an Oracle Linux system running in a Vagrant box is shown below:

[root@localhost ~]# lsmod
Module                  Size  Used by
vboxsf                 38491  1 
ipv6                  391530  20 [permanent]
ppdev                   8323  0 
parport_pc             21178  0 
parport                37780  2 ppdev,parport_pc
sg                     31734  0 
pcspkr                  2094  0 
i2c_piix4              12269  0 
snd_intel8x0           33895  0 
snd_ac97_codec        127589  1 snd_intel8x0
ac97_bus                1498  1 snd_ac97_codec
snd_seq                61406  0 
snd_seq_device          4604  1 snd_seq
snd_pcm               113293  2 snd_intel8x0,snd_ac97_codec
snd_timer              26196  2 snd_seq,snd_pcm
snd                    79940  6 snd_intel8x0,snd_ac97_codec,snd_seq,snd_seq_device,snd_pcm,snd_timer
soundcore               7412  1 snd
e1000                 134545  0 
vboxvideo              42469  1 
ttm                    88927  1 vboxvideo
drm_kms_helper        120123  1 vboxvideo
drm                   343055  4 vboxvideo,ttm,drm_kms_helper
i2c_core               53097  3 i2c_piix4,drm_kms_helper,drm
vboxguest             306752  3 vboxsf,vboxvideo
sysimgblt               2595  1 vboxvideo
sysfillrect             4093  1 vboxvideo
syscopyarea             3619  1 vboxvideo
acpi_cpufreq           12697  0 
ext4                  604127  2 
jbd2                  108826  1 ext4
mbcache                 9265  1 ext4
sd_mod                 36186  3 
ahci                   26684  2 
libahci                27932  1 ahci
pata_acpi               3869  0 
ata_generic             3811  0 
ata_piix               27059  0 
video                  15828  0 
dm_mirror              14787  0 
dm_region_hash         11613  1 dm_mirror
dm_log                  9657  2 dm_mirror,dm_region_hash
dm_mod                106591  8 dm_mirror,dm_log
[root@localhost ~]# 

This could be the starting point of an investigation into what is loaded and really needed, what is not needed, and what might be a good addition in some cases.

modinfo
As you might not be checking your kernel modules on a daily basis, you might not know which module is used for what purpose. In this case modinfo comes to your rescue. If you want to know, for example, what the module snd_seq is used for, you can check the details with modinfo as shown in the example below.

[root@localhost ~]# modinfo snd_seq
filename:       /lib/modules/4.1.12-61.1.28.el6uek.x86_64/kernel/sound/core/seq/snd-seq.ko
alias:          devname:snd/seq
alias:          char-major-116-1
license:        GPL
description:    Advanced Linux Sound Architecture sequencer.
author:         Frank van de Pol , Jaroslav Kysela 
srcversion:     88DDA62432337CC735684EE
depends:        snd,snd-seq-device,snd-timer
intree:         Y
vermagic:       4.1.12-61.1.28.el6uek.x86_64 SMP mod_unload modversions 
parm:           seq_client_load:The numbers of global (system) clients to load through kmod. (array of int)
parm:           seq_default_timer_class:The default timer class. (int)
parm:           seq_default_timer_sclass:The default timer slave class. (int)
parm:           seq_default_timer_card:The default timer card number. (int)
parm:           seq_default_timer_device:The default timer device number. (int)
parm:           seq_default_timer_subdevice:The default timer subdevice number. (int)
parm:           seq_default_timer_resolution:The default timer resolution in Hz. (int)
[root@localhost ~]#

As you can see in the example above, the snd_seq module is the Advanced Linux Sound Architecture sequencer developed by Frank van de Pol and Jaroslav Kysela. Taking this as an example, you can ask yourself: do I need the snd_seq module if I run a server where I have no need for any sound?

Unloading "stuff" you do not need will ensure you have a faster boot sequence timing of your system, less resource consumption and as every component carries a risk of having an issue.... with less components you have theoretically less possible bugs.

In conclusion
It can pay off to optimize your system by checking which kernel modules should be loaded and which could be left out on your Oracle Linux system. When you just use it for common tasks you might not want to spend too much time on this. However, if you are building your own image, or investing time in building a fully automated way of deploying servers fast in a CI/CD manner, you might want to spend time on making sure only the components you really need are in the system and nothing else.


Tuesday, October 10, 2017

Oracle Linux - Yum security plugin

Ensuring your Oracle Linux system is up to date with patches, and especially security patches, can be a challenging task. Updating your system from a pure operating system point of view is not the main issue. A simple yum command will make sure that the latest versions are applied to your system.

The main challenge a lot of enterprises face is identifying which patches and updates are applicable and how they might affect applications running on the systems. For Oracle Linux you have an additional level of assurance that Oracle software will keep working when applying updates from the official Oracle Linux repositories.

For software not developed by Oracle this assurance will not be that strict, and you will face the same possible issues as you would with other Linux distributions such as, for example, Red Hat.

A formal process of identifying what needs to be updated, and after that ensuring the update will not break functionality, should be in place. The first step in such a process is finding the candidates. Finding out which updates, security specific in our example, are available and could be applied is something that can be facilitated by yum itself.

You can use the yum security plugin for this. Some of its options are shown below:

Plugin Options:
    --security          Include security relevant packages
    --bugfixes          Include bugfix relevant packages
    --cve=CVE           Include packages needed to fix the given CVE
    --bz=BZ             Include packages needed to fix the given BZ
    --sec-severity=SEVERITY
                        Include security relevant packages, of this severity
    --advisory=ADVISORY
                        Include packages needed to fix the given advisory

As an example you can use the below command which will show information on available updates.

[vagrant@localhost ~]$ yum updateinfo list
Loaded plugins: security
ELBA-2017-0891 bugfix         binutils-2.20.51.0.2-5.47.el6_9.1.x86_64
ELEA-2017-1432 enhancement    ca-certificates-2017.2.14-65.0.1.el6_9.noarch
ELSA-2017-0847 Moderate/Sec.  curl-7.19.7-53.el6_9.x86_64
ELBA-2017-2506 bugfix         dhclient-12:4.1.1-53.P1.0.1.el6_9.1.x86_64
ELBA-2017-2506 bugfix         dhcp-common-12:4.1.1-53.P1.0.1.el6_9.1.x86_64
ELBA-2017-1373 bugfix         initscripts-9.03.58-1.0.1.el6_9.1.x86_64
ELBA-2017-2852 bugfix         initscripts-9.03.58-1.0.1.el6_9.2.x86_64
ELSA-2017-0892 Important/Sec. kernel-2.6.32-696.1.1.el6.x86_64
ELSA-2017-1372 Moderate/Sec.  kernel-2.6.32-696.3.1.el6.x86_64
ELSA-2017-1486 Important/Sec. kernel-2.6.32-696.3.2.el6.x86_64
ELSA-2017-1723 Important/Sec. kernel-2.6.32-696.6.3.el6.x86_64
ELBA-2017-2504 bugfix         kernel-2.6.32-696.10.1.el6.x86_64
ELSA-2017-2681 Important/Sec. kernel-2.6.32-696.10.2.el6.x86_64
ELSA-2017-2795 Important/Sec. kernel-2.6.32-696.10.3.el6.x86_64

In case you want to see only the security related updates with a severity Moderate you can use the below command to generate this list:

[vagrant@localhost ~]$ yum updateinfo list --sec-severity=Moderate
Loaded plugins: security
ELSA-2017-0847 Moderate/Sec. curl-7.19.7-53.el6_9.x86_64
ELSA-2017-1372 Moderate/Sec. kernel-2.6.32-696.3.1.el6.x86_64
ELSA-2017-2863 Moderate/Sec. kernel-2.6.32-696.13.2.el6.x86_64
ELSA-2017-2863 Moderate/Sec. kernel-headers-2.6.32-696.13.2.el6.x86_64
ELSA-2017-0847 Moderate/Sec. libcurl-7.19.7-53.el6_9.x86_64
ELSA-2017-2563 Moderate/Sec. openssh-5.3p1-123.el6_9.x86_64
ELSA-2017-2563 Moderate/Sec. openssh-clients-5.3p1-123.el6_9.x86_64
ELSA-2017-2563 Moderate/Sec. openssh-server-5.3p1-123.el6_9.x86_64
ELSA-2017-1574 Moderate/Sec. sudo-1.8.6p3-29.el6_9.x86_64
updateinfo list done
[vagrant@localhost ~]$ 

To list the security errata by their Common Vulnerabilities and Exposures (CVE) IDs instead of their errata IDs, specify the keyword cves as an argument:

[vagrant@localhost ~]$ yum updateinfo list cves
Loaded plugins: security
 CVE-2017-2628    Moderate/Sec.  curl-7.19.7-53.el6_9.x86_64
 CVE-2017-2636    Important/Sec. kernel-2.6.32-696.1.1.el6.x86_64
 CVE-2016-7910    Important/Sec. kernel-2.6.32-696.1.1.el6.x86_64
 CVE-2017-6214    Moderate/Sec.  kernel-2.6.32-696.3.1.el6.x86_64
 CVE-2017-1000364 Important/Sec. kernel-2.6.32-696.3.2.el6.x86_64
 CVE-2017-7895    Important/Sec. kernel-2.6.32-696.6.3.el6.x86_64
 CVE-2017-1000251 Important/Sec. kernel-2.6.32-696.10.2.el6.x86_64
 CVE-2017-1000253 Important/Sec. kernel-2.6.32-696.10.3.el6.x86_64
 CVE-2017-7541    Moderate/Sec.  kernel-2.6.32-696.13.2.el6.x86_64

When checking (in an automated fashion) which patches are applicable, the question "why" is very reasonable; meaning, you would like to have some more information on the background of the patches. For this you can run a "yum updateinfo info" command, or you can specifically query for a CVE ID, as shown in the example below:

[vagrant@localhost ~]$ yum updateinfo info --cve CVE-2017-1000251
Loaded plugins: security

=====================================================
   kernel security and bug fix update
=====================================================
  Update ID : ELSA-2017-2681
    Release : Oracle Linux 6
       Type : security
     Status : final
     Issued : 2017-09-13
       CVEs : CVE-2017-1000251
Description : [2.6.32-696.10.2.OL6]
            : - Update genkey [bug 25599697]
            : 
            : [2.6.32-696.10.2]
            : - [net] l2cap: prevent stack overflow on incoming
            :   bluetooth packet (Neil Horman) [1490060 1490062]
            :   {CVE-2017-1000251}
   Severity : Important

=====================================================
   Unbreakable Enterprise kernel security update
=====================================================
  Update ID : ELSA-2017-3620
    Release : Oracle Linux 6
       Type : security
     Status : final
     Issued : 2017-09-19
       CVEs : CVE-2017-1000251
Description : kernel-uek
            : [4.1.12-103.3.8.1]
            : - Bluetooth: Properly check L2CAP config option
            :   output buffer length (Ben Seri)  [Orabug:
            :   26796363]  {CVE-2017-1000251}
   Severity : Important
updateinfo info done
[vagrant@localhost ~]$
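
When you have decided which updates should be applied, the same plugin can also be used to apply only the security related errata. As a simple example, the below command should update only packages for which a security erratum exists:

yum --security update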

By using the yum security plugin in the correct way and automating against it, you can leverage the power of this plugin and implement an (automated) process that will inform you about candidates for installation on your production systems.

Sunday, October 01, 2017

Oracle Cloud - IOT Enterprise Connectivity

Within the Oracle Cloud portfolio you will find the Oracle Internet of Things (IoT) Cloud Service. The IoT cloud service from Oracle provides a starting point for developing an IoT strategy within your company. Or, as Oracle likes to state: Oracle Internet of Things (IoT) Cloud Service is a managed Platform as a Service (PaaS) offering that helps you make critical business decisions and strategies by allowing you to connect your devices to the cloud, analyze data and alert messages from those devices in real time, and integrate your data with enterprise applications, web services, or with other Oracle Cloud Services, such as Oracle Business Intelligence Cloud Service.

One of the main pillars within the Oracle IoT strategy, and a right one in my opinion, is that you will have to connect your IoT strategy to your enterprise solutions. Connecting them to your enterprise solutions can be done for many reasons; for example, integrating with preventive maintenance and/or customer satisfaction programs, just to name two options.



If you look at the above diagram you will notice that Enterprise Connectivity is placed as a central part of the Oracle IOT Cloud Service.

Oracle IoT Cloud Service provides a secure communication channel for pushing messages to your enterprise applications, and for your enterprise applications to push or pull messages from Oracle IoT Cloud Service. The Oracle IoT Cloud Service Client Software Enterprise Library and REST APIs enable your enterprise applications to send commands to your devices. You can further analyze the device data and alerts sent to Oracle IoT Cloud Service by integrating your IoT application to your enterprise applications, Oracle Business Intelligence Cloud Service, Oracle Mobile Cloud Service, or JD Edwards EnterpriseOne with Internet of Things Orchestrator instances. 

REST based API connections
The beauty of connecting an enterprise application with the Oracle IoT Cloud Service is that this can be done fully based upon REST APIs exchanging JSON based messages with each other. This means that you can leverage all the API best practices, and could leverage all the microservice best practices as well. Communication will be based upon APIs supported by workflows within the Oracle IoT Cloud Service.

Using a combination of stream processing and REST based APIs you can make sure that certain events you receive from connected devices result in a JSON message being sent to your enterprise application (or, for example, to the mobile device of a user who has a mobile app installed).

Communicating back to the IoT cloud works in the same way: you can have your applications interact with the Oracle IoT Cloud Service and, for example, query device data and metadata, or send commands to devices.
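
As an illustration, such an interaction could look like the sketch below; the hostname, credentials and path are purely hypothetical placeholders, the actual endpoints are documented in the Oracle IoT Cloud Service REST API reference:

# hypothetical example; replace host, credentials and path with your own
curl -u myuser:mypassword \
     -H "Accept: application/json" \
     -X GET "https://myservice.iot.example.com/iot/api/v2/devices"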

Building a new model
Having the option to connect from and to the Oracle IoT Cloud Service in a loosely coupled way using REST APIs makes completely new models possible. You will be able to read data coming from connected devices. However, you are also able to directly connect this data to processes downstream, and send instructions back to devices from the backend systems.

Whenever you are working on a solution which will involve IoT components it might be worth it to have a good look at the Oracle IoT solution, as this could potentially bring you a lot of value from day one.

Oracle Cloud Access Security Broker

"A cloud access security broker (CASB) is a software tool or service that sits between an organization's on-premises infrastructure and a cloud provider's infrastructure. A CASB acts as a gatekeeper, allowing the organization to extend the reach of their security policies beyond their own infrastructure."

Oracle Cloud Access Security Broker is used for exactly that. The Oracle CASB Cloud Service is the only Cloud Access Security Broker (CASB) that gives you both visibility into your entire cloud stack and the security automation tool your IT team needs.

Read more about the Capgemini view on Oracle CASB via this link, or view the presentation below; an overview created by Adriaan van Zetten and Johan Louwers.

Wednesday, September 20, 2017

Oracle Jet - preparing Oracle Linux for Oracle Jet Development

Oracle JavaScript Extension Toolkit (JET) empowers developers by providing a modular open source toolkit based on modern JavaScript, CSS3 and HTML5 design and development principles. Oracle JET is targeted at intermediate to advanced JavaScript developers working on client-side applications. It's a collection of open source JavaScript libraries along with a set of Oracle contributed JavaScript libraries that make it as simple and efficient as possible to build applications that consume and interact with Oracle products and services, especially Oracle Cloud services.

When developing Oracle JET based solutions you can decide to use your local workstation for the development work, or you could opt to use a virtual machine on your laptop. In this case we will be using a virtual Oracle Linux system which we created using Vagrant and the Vagrant boxes provided by Oracle. For a more detailed description of how to get Oracle Linux started with Vagrant you can refer to this specific blogpost.

Preparing your Oracle Jet Development system
To get started with Oracle JET on a fresh Oracle Linux installation you will need to undertake the couple of steps outlined below. The steps include:
  • Install Linux development tools
  • Install Node.JS
  • Install Yeoman
  • Install Grunt
  • Install Oracle JET Yeoman Generator

Install Linux development tools
For some of the Node.JS and Yeoman modules it is required to have a set of Linux development tools present on your machine. You can install them with a simple YUM command as shown below:

yum install gcc-c++ make

Install Node.JS
The installation of Node.JS starts with ensuring you have the proper repositories in place. This can be done with a single command as shown below:

curl --silent --location https://rpm.nodesource.com/setup_8.x | sudo bash -

After this you can do the actual installation of Node.JS using yum as shown below:

yum -y install nodejs

Install Yeoman
After the installation of Node.JS you should have NPM on your system and you should be able to install Yeoman. Yeoman is a generic scaffolding system allowing the creation of any kind of app. It allows for rapidly getting started on new projects and streamlines the maintenance of existing projects. Yeoman is language agnostic. It can generate projects in any language (Web, Java, Python, C#, etc.) Yeoman by itself doesn't make any decisions. Every decision is made by generators which are basically plugins in the Yeoman environment.

You can install Yeoman with a single NPM command as shown below:
npm install -g yo

Install Grunt
After the installation of Node.JS you should have NPM on your system and you should be able to install Grunt. Grunt is a JavaScript task runner, a tool used to automatically perform frequently used tasks such as minification, compilation, unit testing, linting, etc. It uses a command-line interface to run custom tasks defined in a file (known as a Gruntfile). Grunt was created by Ben Alman and is written in Node.js.

You can install Grunt with a single NPM command as shown below:
npm install -g grunt-cli

Install Oracle JET Yeoman Generator
After the installation of Node.JS you should have NPM on your system and you should be able to install the Oracle JET Generator for Yeoman.

You can install the Oracle JET Yeoman Generator with a single NPM command as shown below:
npm install -g generator-oraclejet

Verify the installation
To verify the installation you can use the below command to see what is installed by NPM and you can try and run Yeoman.

To check what is installed you can use the NPM command in the way shown below:
[root@localhost ~]# npm list -g --depth=0
/usr/lib
├── generator-oraclejet@3.2.0
├── grunt-cli@1.2.0
├── npm@5.3.0
└── yo@2.0.0

[root@localhost ~]# 

After this you can try to start Yeoman in the way shown below (do not run yo as root).

[vagrant@localhost ~]$ yo
? 'Allo! What would you like to do? 
  Get me out of here! 
  ──────────────
  Run a generator
❯ Oraclejet 
  ──────────────
  Update your generators 
  Install a generator 
(Move up and down to reveal more choices)

If both give the expected result you should be ready to get started with your first Oracle JET project.
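
As a quick first step you could, for example, scaffold a basic application with the Oracle JET Yeoman generator and serve it with Grunt; the application name below is just an example:

yo oraclejet myFirstApp
cd myFirstApp
grunt build
grunt serve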

Thursday, August 31, 2017

Oracle Linux - ClusterShell

When operating large clusters consisting of large numbers of nodes, the desire to be able to execute a command on all nodes, or a subset of nodes, comes quickly. You might, for example, want to run certain commands on all nodes without having to log in to each of them. For configuration work, solutions like Ansible or Puppet are very good choices. However, for day to day operations they might not be sufficient, and you would like to have the option of a distributed shell.

A solution for this is building your own tooling, or you can adopt a solution such as ClusterShell. ClusterShell is a scalable Python framework, however it is a lot more than that. In its simplest form of usage it is a way to execute commands on groups of nodes in your cluster with a single command. That leaves open the option to do a lot more interesting things with it when you start to look into the options of hooking into the Python APIs and build your own distributed solutions with ClusterShell as a foundation.

Installing ClusterShell on Oracle Linux is relatively easy and can be done by using the EPEL repository for YUM. Just ensure you have the EPEL repository available. If you have the EPEL repository for Oracle Linux installed you should have the file /etc/yum.repos.d/epel.repo which, in our case, contains the following repository configuration:

[epel]
name=Extra Packages for Enterprise Linux 6 - $basearch
#baseurl=http://download.fedoraproject.org/pub/epel/6/$basearch
mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-6&arch=$basearch
failovermethod=priority
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6

[epel-debuginfo]
name=Extra Packages for Enterprise Linux 6 - $basearch - Debug
#baseurl=http://download.fedoraproject.org/pub/epel/6/$basearch/debug
mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-debug-6&arch=$basearch
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6
gpgcheck=1

[epel-source]
name=Extra Packages for Enterprise Linux 6 - $basearch - Source
#baseurl=http://download.fedoraproject.org/pub/epel/6/SRPMS
mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-source-6&arch=$basearch
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6
gpgcheck=1

If you do not have this you will have to locate and download the appropriate epel-release-x-x.noarch.rpm file from http://download.fedoraproject.org/pub/epel/ . As an example, you could download the file and install it as shown below:

# wget http://download.fedoraproject.org/pub/epel/6/i386/epel-release-6-5.noarch.rpm
# rpm -ivh epel-release-6-5.noarch.rpm

Now you should be able to use YUM to install ClusterShell on Oracle Linux, this can be done by executing the below yum command:

yum install clustershell

To test the installation you can, as an example, execute the below command to verify that clush is installed. clush is part of the full ClusterShell installation, and being able to interact with it is a good indication of a successful installation.

[root@localhost /]# clush --version
clush 1.7.3
[root@localhost /]# 

To make full use of ClusterShell you will have to start defining your configuration and the nodes you want to be able to control with ClusterShell. The main configuration is done in the configuration files located under /etc/clustershell . A basic installation should give you the below files in this location:

[root@localhost clustershell]# tree /etc/clustershell/
/etc/clustershell/
├── clush.conf
├── groups.conf
├── groups.conf.d
│   ├── genders.conf.example
│   ├── README
│   └── slurm.conf.example
├── groups.d
│   ├── cluster.yaml.example
│   ├── local.cfg
│   └── README
└── topology.conf.example

2 directories, 9 files
[root@localhost clustershell]# 
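
Once you have defined your nodes, running commands across the cluster is straightforward. The below sketch shows two common invocations of clush; the node names are examples:

# run a command on an explicit set of nodes
clush -w node[01-04] uptime

# run a command on all configured nodes
clush -a uname -r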

Friday, August 25, 2017

Oracle Linux - Install Ansible

Ansible is an open-source automation engine that automates software provisioning, configuration management, and application deployment. Ansible is based upon a push mechanism where you push configurations to the servers, rather than pulling them as is done by, for example, Puppet. When you want to start using Ansible, the first step required will be configuring the central location from where you will push the Ansible configurations. Installing Ansible on an Oracle Linux machine is rather straightforward and can be achieved by following the below steps.

Step 1
To be able to install Ansible via the YUM command you will have to ensure that you have the EPEL release RPM installed, which will take care of ensuring that you have the Fedora EPEL YUM repository in place. This is due to the fact that the RPMs for Ansible are placed in the EPEL repository.

You can do so by first executing a wget to download the file and then installing it with the RPM command:

wget http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm


rpm -ivh epel-release-6-8.noarch.rpm 

If done correctly you will now have something like the below in your YUM repository directory:

[root@localhost ~]# ls -la /etc/yum.repos.d/
total 24
drwxr-xr-x.  2 root root 4096 Aug 25 09:22 .
drwxr-xr-x. 63 root root 4096 Aug 25 08:36 ..
-rw-r--r--   1 root root  957 Nov  5  2012 epel.repo
-rw-r--r--   1 root root 1056 Nov  5  2012 epel-testing.repo
-rw-r--r--.  1 root root 7533 Mar 28 10:13 public-yum-ol6.repo
[root@localhost ~]# 

If you check the epel.repo file you should have at least the "Extra Packages for Enterprise Linux 6" channel active. You can see this in the example below:

[root@localhost ~]# cat /etc/yum.repos.d/epel.repo 
[epel]
name=Extra Packages for Enterprise Linux 6 - $basearch
#baseurl=http://download.fedoraproject.org/pub/epel/6/$basearch
mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-6&arch=$basearch
failovermethod=priority
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6

[epel-debuginfo]
name=Extra Packages for Enterprise Linux 6 - $basearch - Debug
#baseurl=http://download.fedoraproject.org/pub/epel/6/$basearch/debug
mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-debug-6&arch=$basearch
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6
gpgcheck=1

[epel-source]
name=Extra Packages for Enterprise Linux 6 - $basearch - Source
#baseurl=http://download.fedoraproject.org/pub/epel/6/SRPMS
mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-source-6&arch=$basearch
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6
gpgcheck=1
[root@localhost ~]# 

Step 2
As soon as you have completed step 1 you should be able to install Ansible on Oracle Linux by executing a simple yum install command.

yum install ansible

Step 3
Basically your installation should now be done, and Ansible should be available and ready to be configured. To ensure the installation is right you can conduct the below test to verify.

[root@localhost init.d]#  ansible localhost -m ping
 [WARNING]: provided hosts list is empty, only localhost is available

localhost | SUCCESS => {
    "changed": false, 
    "ping": "pong"
}
[root@localhost init.d]# 
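
The warning in the above output indicates that the default inventory is still empty. As a minimal sketch, you could add your servers to the default inventory file /etc/ansible/hosts (the hostnames are examples) and ping them all:

[webservers]
web01.example.com
web02.example.com

# after saving the inventory, ping every host in it
ansible all -m ping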

Oracle Linux - inspect memory fragments with buddyinfo

The file /proc/buddyinfo is used primarily for diagnosing memory fragmentation issues. Using the buddy algorithm, each column represents the number of pages of a certain order (a certain size) that are available at any given time. You get to view the free fragments for each available order, for the different zones of each numa node.

The content of /proc/buddyinfo as shown below will show you the number of free memory chunks. You have to read the numbers from left to right, where each value in the first column represents chunks of size (2^0)*PAGE_SIZE, the second of (2^1)*PAGE_SIZE, and so on.

An example of the content of the buddyinfo file on Oracle Linux 6 can be seen below:

[root@jenkins proc]# cat buddyinfo 
Node 0, zone      DMA     15     32     84     24      6      5      2      0      0      0      0 
Node 0, zone    DMA32    604    342    165     64     28     10     15      2      1      0      0 
[root@jenkins proc]#
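
If you want a quick impression of how much free memory these fragments represent in total, the below one-liner is a sketch that sums all chunks, assuming a 4 KiB page size:

# order n holds chunks of 2^n pages; the counts start in the fifth column
awk '{ for (i = 5; i <= NF; i++) total += $i * 2^(i-5) * 4 } END { print total " KiB free" }' /proc/buddyinfo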

Friday, August 04, 2017

Oracle Linux - Intuition Engineering and Site Reliability Engineering with Elastic and Vizceral

IT operations are vital to organisations; in daily business operations a massive system disruption will halt an entire enterprise. Running and operating massive scale IT deployments that are too big to fail takes more than the traditional way of working. Next to DevOps we see the rise of Site Reliability Engineering, originally pioneered by Google, complemented with Intuition Engineering, pioneered by Netflix. You see more and more companies whose IT is too big to fail turn to new concepts of operation. By developing new ways of operation, proven ways are adopted and improved.

Site Reliability Engineering
According to Ben Treynor, VP of engineering at Google, Site Reliability Engineering is the following:
"Fundamentally, it's what happens when you ask a software engineer to design an operations function. When I came to Google, I was fortunate enough to be part of a team that was partially composed of folks who were software engineers, and who were inclined to use software as a way of solving problems that had historically been solved by hand. So when it was time to create a formal team to do this operational work, it was natural to take the "everything can be treated as a software problem" approach and run with it.

So SRE is fundamentally doing work that has historically been done by an operations team, but using engineers with software expertise, and banking on the fact that these engineers are inherently both predisposed to, and have the ability to, substitute automation for human labor.

On top of that, in Google, we have a bunch of rules of engagement, and principles for how SRE teams interact with their environment -- not only the production environment, but also the development teams, the testing teams, the users, and so on. Those rules and work practices help us to keep doing primarily engineering work and not operations work."

Intuition Engineering
An addition to Site Reliability Engineering can be Intuition Engineering. Intuition Engineering provides a Site Reliability Engineer with information in a way that appeals to the brain's capacity to process massive amounts of visual data in parallel, to give users an experience -- a sense, an intuition -- of the state of a holistic system, rather than objective facts. An example of an Intuition Engineering tool is Vizceral, developed by Netflix and discussed by Casey Rosenthal, Engineering Manager at Netflix, Justin Reynolds and others in numerous talks. In the below video you can see Justin Reynolds give an introduction to Vizceral.


Implementing Vizceral
For small system footprints using Vizceral might be interesting, however not that important for day to day operations. When operating a relatively small number of servers and services it is relatively easy to locate an issue and make a decision. In cases where you have a massive number of servers and services it will be hard for a site reliability engineer to take in the vast amount of data, spot possible issues and take split second decisions. In deployments like this it can be very beneficial to implement Vizceral.

Even though Vizceral might look complicated at first glance, it is in reality a relatively simple however extremely well crafted solution which has been donated to the open source community by Netflix. The process of getting the right data into Vizceral, to provide the needed view of the now, is the more complex task.

The below image shows a common implementation where we are running a large number of Oracle Linux nodes. All nodes have a local Elastic Beat to collect logs and data and ship them to Elasticsearch, where Site Reliability Engineers can use Kibana to get insight into all data from all servers.



Even though Elasticsearch and Kibana, in combination with Logstash and Elastic Beats, provide an enormous benefit to Site Reliability Engineers, they can still be overwhelmed by the massive amount of data available, and it can take time to find the root cause of an issue. As we are already collecting all data from all servers and services, we would like to also feed this to Vizceral. The below image shows a reference implementation where we pull data from Elasticsearch and provide it to Vizceral.



As you can see from the above image we have introduced two new components, the "Vizceral Feeder API" and "Netflix Vizceral". Both components run as Docker containers.

The Vizceral Feeder API
To extract the data we collected inside Elasticsearch and feed this to Vizceral we use the Vizceral Feeder API. The Vizceral Feeder API is an internal product which we hope to provide to the Open Source community at one point in the near future. In effect the API is a bridge between Elasticsearch and Vizceral.

The Vizceral Feeder API will query Elasticsearch for all the required information. Based upon the dataset returned, a JSON file is created in a format compatible with Vizceral.
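
To illustrate, a fragment of such a file could look like the below; the node names and numbers are made up, and the authoritative description of the format can be found in the Vizceral documentation:

{
  "renderer": "global",
  "name": "edge",
  "nodes": [
    { "name": "INTERNET" },
    { "name": "datacenter-1" }
  ],
  "connections": [
    {
      "source": "INTERNET",
      "target": "datacenter-1",
      "metrics": { "normal": 1000, "warning": 10, "danger": 5 }
    }
  ]
}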

Depending on your appetite to modify Vizceral you can have Vizceral pull the JSON file from the Feeder API every x seconds or you can have a secondary process pull the file from the Feeder and place it locally in the Docker container hosting Vizceral.

If you are not into developing your own addition to Vizceral and would like to be up and running relatively fast you should go for the local file replacement strategy.

If you go for the solution in which Vizceral will pull the JSON from the feeder, you will have to take the following into account:

  • The Vizceral Feeder API needs to be accessible by the workstations used by the Site Reliability Engineers 
  • The JSON file needs to be presented with the Content-type: application/json header to ensure the data is seen as true JSON
  • The JSON file needs to be presented with the Access-Control-Allow-Origin: * header to ensure it is CORS compatible

Thursday, August 03, 2017

Oracle Linux - enable Docker daemon socket option

Installing Docker on an Oracle Linux instance is relatively easy and you can get things to work extremely fast. Within a very short timeframe you will have your Docker engine running and your first containers up and running. However, at one point in time you do want to start interacting with Docker in a more integrated manner and not only use the docker command from the CLI. In a more integrated situation you want to communicate with Docker over an API.

In our case the need was to have Jenkins build a Maven project which would build a Docker container with the help of the Docker Maven Plugin built by the people at Spotify. The first run we did hit an issue stating that the build failed with the below message:

[INFO] I/O exception (java.io.IOException) caught when processing request to {}->unix://localhost:80: Permission denied
[INFO] Retrying request to {}->unix://localhost:80

The issue needs to be solved by taking two steps: (1) ensuring you have your Docker daemon listening on an external socket and (2) ensuring you set an environment variable.

Setting the Docker daemon socket option:
To ensure the Docker daemon will listen on port 2375 you have to make some changes to /etc/sysconfig/docker . The location of this configuration file differs per Linux distribution, however on Oracle Linux you will need this file.

You will have to ensure that other_args states which sockets the daemon should listen on. In the below example we have made the explicit configuration that it needs to run on the localhost IP and the external IP of the Docker host.

other_args="-H tcp://127.0.0.1:2375 -H tcp://192.168.56.4:2375 -H unix:///var/run/docker.sock"
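
After changing /etc/sysconfig/docker you will need to restart the Docker daemon before the new sockets become active, for example as shown below:

service docker restart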

Setting DOCKER_HOST environment variable:
To make sure that Jenkins knows where to find the Docker API you will have to set the DOCKER_HOST environment variable. You can do so from the command line with the below command:

export DOCKER_HOST="tcp://192.168.56.4:2375"

Even though the above export works, if you only need this for Jenkins you can also set a global environment variable within Jenkins. Setting it in Jenkins when you only need it in Jenkins might be a better idea. You can set global environment variables within Jenkins under "Manage Jenkins" - "Configure System" - "Global Properties".

Now, when you run a build the build should connect to docker on port 2375 (not 80) and the build should finish without any issue. 

Oracle Linux - IPv4 forwarding is disabled. Networking will not work

Using Docker for the first time can be confusing, especially on the networking part. When you run Docker for the first time on a vanilla Oracle Linux instance, you might hit a networking issue the first time you start a container and try to do network forwarding. By default IPv4 forwarding is disabled, and it should be enabled to make use of Docker in the right way.

The below error might be what you are facing when starting your first docker container on Oracle Linux:

WARNING: IPv4 forwarding is disabled. Networking will not work.

To resolve this issue you will need to make changes to the configuration of your Docker host OS. In our case we run an Oracle Linux operating system with the Docker engine on top of it. To ensure you have forwarding active you will have to change a setting in /etc/sysctl.conf . By default you will have the following:

# Controls IP packet forwarding
net.ipv4.ip_forward = 0

You will have to change this into 1 as shown below:

# Controls IP packet forwarding
net.ipv4.ip_forward = 1 
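
To activate the new setting without a reboot, and to verify it is indeed active, you can use sysctl as shown below:

sysctl -p
sysctl net.ipv4.ip_forward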

As soon as you have ensured the new setting is active, your Docker containers should start without any issue.

Monday, July 24, 2017

Oracle code - Jenkins check if file is present in workspace

When using Jenkins to automate parts of your build and deployment work in a CI/CD manner, you do want to include certain failsafe measures. A common ask is to check if a certain file is present in your Jenkins workspace. In our example we pull code from a Gitlab repository to build a Maven based project. One of the first things we would like to ensure is that the pom.xml file is present. In case the pom.xml file is not present we know that the build will fail, and we will never get to a position in which we can build the required .jar file for our project.

To check if a file is present you can use the below example:

if (fileExists('pom.xml')) {
    echo 'Yes'
} else {
    echo 'No'
}

As you can see this is a fairly straightforward check which will verify if pom.xml is present. In case it is not present it will print "No", in case it is present it will print "Yes". In a real-world example you want to take some action on this instead of just printing that the file is not present; for example, you might want to abort the build. The below example could be used to do so:

    currentBuild.result = 'ABORTED'
    error('pom.xml file has NOT been located')

The above example code will abort the Jenkins job and will give the error that the pom.xml file has not been found. The more complete example is shown below:

if (fileExists('pom.xml')) {
    echo 'Yes'
} else {
    currentBuild.result = 'ABORTED'
    error('pom.xml file has NOT been located')
}

Ensuring that you have checks like this in place will make the outcome of Jenkins more predictable and can save you a lot of issues at a later stage. In reality, a large part of some of our code in Jenkins is there to make sure everything is in place and is doing what it is expected to do. Checking and error handling is a big part of automation.

Sunday, July 23, 2017

Oracle Code - Jenkins failed to build maven project

The first time I tried to build an Oracle Java project with Maven it resulted in an error. Which is not surprising; every time you try to do something for the first time the chances that it will not work are relatively high. In my case I intended to build a REST API built with Spring and compile it with Maven in Jenkins. The steps Jenkins should undertake were: get the code from my local Gitlab repository and build the code as I would do in a normal situation. The code I used is exactly the same code as I have shared on github for your reference.

The main error I received when starting the actual build with Maven was the one shown below:

[ERROR] No goals have been specified for this build. You must specify a valid
lifecycle phase or a goal in the format <plugin-prefix>:<goal> or
<plugin-group-id>:<plugin-artifact-id>[:<plugin-version>]:<goal>.
Available lifecycle phases are:
validate, initialize, generate-sources, process-sources, generate-resources,
process-resources, compile, process-classes, generate-test-sources,
process-test-sources, generate-test-resources, process-test-resources, test-compile,
process-test-classes, test, prepare-package, package, pre-integration-test,
integration-test, post-integration-test, verify, install, deploy, pre-clean, clean,
post-clean, pre-site, site, post-site, site-deploy. -> [Help 1]

If you look at my github page you can already see a hint of the solution. In the documentation I stated the following command for creating the actual .jar file (the result I wanted from Jenkins):

mvn clean package

If we look at how the project was defined in Jenkins, I had left the "goals" section empty. Adding package to the goals section resolved the issue, and the next time I started the job I was presented with a successfully completed job and a fully compiled .jar file, capable of being executed and serving me the needed REST API.

As you can see from the error message, a lot of other goals can also be specified.




Oracle Linux - Configure Jenkins for Maven

When you are working a lot with Oracle Java and you have the ambition to start developing your Java applications with Maven in a manner that allows you to automate a lot of the steps by leveraging Jenkins, you will have to configure Jenkins. The use of Jenkins in combination with Maven can speed up your continuous integration and continuous deployment models enormously.

I already posted an article on how to install Jenkins on Oracle Linux on this weblog; you can find the original post here. Originally the post was written for a project where we did not use Maven; we did use Jenkins for some other tasks. However, now the need arises to use Maven as well.

Configuring Maven under Jenkins is relatively easy; you can use the "global tool configuration" menu in Jenkins to make the needed configurations. It is advisable to not have Jenkins do the installation, but rather to install Maven manually and after that configure it in Jenkins.

The common error
The common error when configuring Maven is that, the first time you look at this, you tend to define the location of mvn as the Maven home. In our case mvn was located in /usr/bin on our Oracle Linux instance. However, stating /usr/bin as the Maven home resulted in the error: /usr/bin doesn't look like a Maven directory

Finding the maven home
As we just found out that /usr/bin is not the Maven home, we have to find the correct Maven home. The resolution can be found in the mvn --version command as shown below:

[root@jenkins /]#
[root@jenkins /]# mvn --version
Apache Maven 3.3.9 (bb52d8502b132ec0a5a3f4c09453c07478323dc5; 2015-11-10T16:41:47+00:00)
Maven home: /usr/share/apache-maven
Java version: 1.8.0_141, vendor: Oracle Corporation
Java home: /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.141-2.b16.el6_9.x86_64/jre
Default locale: en_US, platform encoding: UTF-8
OS name: "linux", version: "4.1.12-61.1.33.el6uek.x86_64", arch: "amd64", family: "unix"
[root@jenkins /]#
[root@jenkins /]#

As you can see, the Maven home is stated in the output. Providing the Maven home /usr/share/apache-maven to Jenkins will ensure you have configured Maven correctly.

Saturday, July 22, 2017

Oracle Linux - changing the amount of memory of your Vagrant box

Vagrant is an open-source software product built by HashiCorp for building and maintaining portable virtual development environments. The core idea behind its creation lies in the fact that environment maintenance becomes increasingly difficult in a large project with multiple technical stacks. Vagrant manages all the necessary configurations for the developers in order to avoid unnecessary maintenance and setup time, and increases development productivity. Vagrant is written in the Ruby language, but its ecosystem supports development in almost all major languages.

I use Vagrant a lot, really a lot, and especially in combination with Oracle Linux. Oracle ships a number of default Vagrant boxes from oracle.com which speeds up the development, test and experimental way of working a lot. Without the need to manually maintain local clones of Oracle VirtualBox images you can now use Vagrant to run Oracle Linux instances extremely fast. A short guide on how to get started with Vagrant can be found in this specific blogpost on my blog.

When you run a box for a short time you might not be that interested in memory tuning, as long as it works. However, if you need to run multiple boxes for a longer period of time as part of a wider development ecosystem, you do want to ensure that all the boxes fit in your development system and you still have some free memory left to do actual things.

A default box takes a relatively large part of the memory of your host. Tuning this memory to what it actually should be is relatively easy. In our example the Oracle Linux 6.9 box starts by default with 2048 MB of memory. We wanted to trim this down to 1024 MB. To state the exact amount of memory you need to configure some parts of your Vagrantfile config file.

The below example, added to the Vagrantfile, defines the amount of memory that will be given to the box:

config.vm.provider "virtualbox" do |vb|
  vb.memory = "1024"
end

This will make sure the box is given only 1024 MB. Additionally you can pass other configuration; for example, to control the number of CPUs you could add the below line right after the vb.memory line.

vb.cpus = 2
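
Putting the two together, the complete provider block in the Vagrantfile would then look like this:

config.vm.provider "virtualbox" do |vb|
  vb.memory = "1024"
  vb.cpus = 2
end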

Understanding and using the Vagrantfile configuration options will help you in building and tuning your boxes in the most ideal way to have the best development environment you can imagine on your local machine.

Friday, July 21, 2017

Oracle Linux - Change hostname for Vagrant host

Vagrant is an open-source software product built by HashiCorp for building and maintaining portable virtual development environments. The core idea behind its creation lies in the fact that environment maintenance becomes increasingly difficult in a large project with multiple technical stacks. Vagrant manages all the necessary configurations for the developers in order to avoid unnecessary maintenance and setup time, and increases development productivity. Vagrant is written in the Ruby language, but its ecosystem supports development in almost all major languages.

I use Vagrant a lot, really a lot, and especially in combination with Oracle Linux. Oracle ships a number of default Vagrant boxes from oracle.com which speeds up the development, test and experimental way of working a lot. Without the need to manually maintain local clones of Oracle VirtualBox images you can now use Vagrant to run Oracle Linux instances extremely fast. A short guide on how to get started with Vagrant can be found in this specific blogpost on my blog.

When you do a default start of a Vagrant box, in our example an Oracle Linux 6.9 instance, you will see that the hostname is not explicitly set. In most cases this is not an issue; however, in some cases the hostname is a vital part of how your software might work. The most common way of changing the hostname is doing it directly within the Oracle Linux operating system. However, a better way of doing things when working with Vagrant is editing the Vagrantfile config file, which can be found in the directory where you did a "vagrant init".

Change hostname in Vagrantfile
When using Vagrant you should use the power of Vagrant. This means, if you want your machine to have a specific hostname, you can do so by changing the Vagrantfile instead of doing it on the Oracle Linux operating system within the box when it is running. If you read the Vagrant documentation you will find the following on this subject:

"config.vm.hostname - The hostname the machine should have. Defaults to nil. If nil, Vagrant will not manage the hostname. If set to a string, the hostname will be set on boot. "

If we take, for example, a running box which we initiated without having done anything for the hostname in the Vagrantfile, you will notice the hostname is localhost.

[vagrant@localhost ~]$ 
[vagrant@localhost ~]$ uname -a
Linux localhost 4.1.12-61.1.33.el6uek.x86_64 #2 SMP Thu Mar 30 18:39:45 PDT 2017 x86_64 x86_64 x86_64 GNU/Linux
[vagrant@localhost ~]$ 
[vagrant@localhost ~]$ cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

[vagrant@localhost ~]$ 

If we want to have a box named somehost.example.com we could ensure we have the below line in our Vagrantfile config file when we start it:

config.vm.hostname = "somehost.example.com"

When you log in to the Oracle Linux operating system within the box and check the same as in the above example, you will be able to see the difference:

[vagrant@somehost ~]$ 
[vagrant@somehost ~]$ uname -a
Linux somehost.example.com 4.1.12-61.1.33.el6uek.x86_64 #2 SMP Thu Mar 30 18:39:45 PDT 2017 x86_64 x86_64 x86_64 GNU/Linux
[vagrant@somehost ~]$ 
[vagrant@somehost ~]$ cat /etc/hosts
127.0.0.1 somehost.example.com somehost
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
[vagrant@somehost ~]$ 

As you can see, changing the Vagrantfile will change the hostname within the box. Instead of changing it manually you should use the power of Vagrant to set the correct hostname in your Oracle Linux instance when using Vagrant.

Oracle Linux - using vagrant boxes with a static IP

Vagrant is an open-source software product built by HashiCorp for building and maintaining portable virtual development environments. The core idea behind its creation lies in the fact that environment maintenance becomes increasingly difficult in a large project with multiple technical stacks. Vagrant manages all the necessary configurations for the developers in order to avoid unnecessary maintenance and setup time, and increases development productivity. Vagrant is written in the Ruby language, but its ecosystem supports development in almost all major languages.

I use Vagrant a lot, really a lot, and especially in combination with Oracle Linux. Oracle ships a number of default Vagrant boxes from oracle.com which speeds up the development, test and experimental way of working a lot. Without the need to manually maintain local clones of Oracle VirtualBox images you can now use Vagrant to run Oracle Linux instances extremely fast. A short guide on how to get started with Vagrant can be found in this specific blogpost on my blog.

The main confusion on ports and ip addresses 
When I talk to people about Vagrant and running Oracle Linux, or any other box, the main confusion comes from the networking side of things. In general the first confusion is how to access ports running in the box from your local machine. In effect Vagrant will do a port mapping of ports available on the operating system in your box to a specified port on localhost; that is, when you configure this in your Vagrantfile configuration file (which I will dedicate another post to).

The second confusion comes when people need to communicate between boxes. In those cases it would be very convenient to give each box its own address. For example, if you have one box running an Oracle database while a secondary box runs your application server, you would like to be able to establish connectivity between the two.

Giving each box an external IP
The solution to this issue is providing each Vagrant box running your Oracle Linux instance an external IP address. A hint is already given in the Vagrantfile configuration file, which resides in the directory where you gave a "vagrant init" command. If you read the file you will find a comment above a commented configuration line stating: "Create a private network, which allows host-only access to the machine using a specific IP."

In my example I wanted to give a specific box a specific IP address in a static manner. In this specific case the address needed to be 192.168.56.3 to be precise. This IP would become part of a private network which will only be accessible on my Macbook, and can be accessed from my Macbook directly or from any other Vagrant box running on it. While you can choose any IP you would like, you should use an IP from the reserved private address space. These IPs are guaranteed to never be publicly routable, and most routers actually block traffic from going to them from the outside world.

To ensure my specific box would always run on 192.168.56.3 I had to uncomment the line and ensure that it would read as the line below:

 config.vm.network "private_network", ip: "192.168.56.3"

This binds the box via config.vm.network to a private network with the specific IP we needed. If we now try to ping the box on this address it will respond. Also, if I go into another box, for example a box with 192.168.56.2, and try to ping 192.168.56.3, it will respond. Meaning: issue resolved, and I now have two boxes that can freely communicate with each other without any issue.

Showing it in Oracle Linux
Now, if we have a look at the Oracle Linux operating system within the running box we can see we have a new interface for this specific address, as shown below:

eth1      Link encap:Ethernet  HWaddr 08:00:27:3D:A5:49  
          inet addr:192.168.56.3  Bcast:192.168.56.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe3d:a549/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:86 errors:0 dropped:0 overruns:0 frame:0
          TX packets:20 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:7328 (7.1 KiB)  TX bytes:1482 (1.4 KiB)

If you are wondering how the Oracle Linux operating system gets the IP address, and whether this is done with some "hidden" DHCP server that binds to a specific virtual MAC address, you can check the configuration by looking at the /etc/sysconfig/network-scripts/ifcfg-eth1 config file within the Oracle Linux operating system that runs inside the Vagrant box. The content of the file is shown below:

#VAGRANT-BEGIN
# The contents below are automatically generated by Vagrant. Do not modify.
NM_CONTROLLED=no
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.168.56.3
NETMASK=255.255.255.0
DEVICE=eth1
PEERDNS=no
#VAGRANT-END

As you can see the file is generated by Vagrant itself and no "hidden" DHCP trick is required. To push the generated file, Vagrant uses parts of its own provisioning solution, which can be used for a lot more interesting things.
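
As a minimal sketch of that same provisioning mechanism, assuming you simply want to run a shell command during provisioning (the package shown is purely illustrative), you could add a line like the one below to your Vagrantfile:

 config.vm.provision "shell", inline: "yum -y install git"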

Sunday, July 16, 2017

Oracle Linux - privately build your docker images

In our examples we are running a Docker engine on an Oracle Linux host, which we use to explain how you can work with Docker and containers. In this example post we need to privately build our Docker images and containers. This request is not that uncommon: as Docker is widely used in enterprises, the need for a safe way of building your internal Docker images loaded with your own code deployments is seen often. In our example case we will use github.com as the source repo as well as a local file, and we will depend on certain images available on hub.docker.com; however, when you deploy a fully private environment you should use your own Git implementation and your own Docker registry.

Building with github
When we build based upon a Dockerfile in GitHub we have to provide "docker build" with the location of the Dockerfile. Alternatively, if your Dockerfile is in the root of your project, you can call it without an explicit reference to the Dockerfile. In the example below we explicitly call the Dockerfile, which is not always the best way of doing it.

We use the below command in our example:

docker build --no-cache=true --rm -t databases/mongodb:latest https://raw.githubusercontent.com/louwersj/docker_mongodb_ol6/master/mongodb_3.4/OL6.9/Dockerfile

This will result in the download of the Dockerfile from GitHub and the start of the build; as you can see we use the raw.githubusercontent.com URL to ensure we get the raw file. Additionally we use a couple of flags for the build:

--no-cache=true
This is used to ensure we do not use any cache. In some cases it can be useful to use the cache of previous builds; in cases where you want to be a hundred percent sure you use the latest of everything, you should prevent the use of the cache by using this flag.

--rm
This flag will ensure that temporary build data is removed after the build. If not, you will find a lot of directories under /tmp which hold old build data. To ensure the system stays clean you should include this flag during the build operation.

-t databases/mongodb:latest
This is used to apply the right tag to the newly built image. As you can see we indicate that this is part of the databases set and is a MongoDB image tagged as latest.

As soon as the build has completed you can check the list of images within Docker to see if it is available, as shown in the example below:

[root@localhost tmp]# docker images
REPOSITORY          TAG                 IMAGE ID            CREATED              SIZE
databases/mongodb   latest              185f6f594f9e        About a minute ago   251.4 MB
[root@localhost tmp]# 

Testing the new build 
Now that we have built a new image we would like to test it. This can be done in the same way as you would normally create a container from an image.

[root@localhost tmp]# docker run --name mongodb_node_0 -d -p 27017 databases/mongodb:latest
154b9b82e43186411c614ebdc45cdd1c7cc98ec8c6b7af525474f880a8356d52
[root@localhost tmp]# 

If we now check the running containers we will find the newly created container with the name mongodb_node_0 up and running:

[root@localhost tmp]# docker ps
CONTAINER ID        IMAGE                      COMMAND             CREATED             STATUS              PORTS                      NAMES
154b9b82e431        databases/mongodb:latest   "/usr/bin/mongod"   34 seconds ago      Up 34 seconds       0.0.0.0:32771->27017/tcp   mongodb_node_0
[root@localhost tmp]# 

As you can see from the above example the container is now running; to be extra sure we can take a look inside by using the exec command to start a bash session:

[root@localhost tmp]#
[root@localhost tmp]# docker exec -it 154b9b82e431 /bin/bash
[root@154b9b82e431 /]# ps -ef|grep mongo
root         1     0  0 19:17 ?        00:00:01 /usr/bin/mongod
root        35    24  0 19:19 ?        00:00:00 grep mongo
[root@154b9b82e431 /]# 
[root@154b9b82e431 /]# exit
exit
[root@localhost tmp]#
[root@localhost tmp]# 

Building with a local file
For a local file based build we have placed the Dockerfile in /tmp/build_test/. We can build a Docker image in the same manner as we did for the GitHub example; however, now we have to state the location of the Dockerfile on the local file system. Be sure to state the directory and not the file itself, to prevent the error shown below:

[root@localhost /]# docker build --no-cache=true --rm -t localbuild/mongodb:latest /tmp/build_test/Dockerfile
unable to prepare context: context must be a directory: /tmp/build_test/Dockerfile
[root@localhost /]#

As you can see, calling the file will give an issue; if we call the location the build will happen without any issues:

[root@localhost /]# docker build --no-cache=true --rm -t localbuild/mongodb:latest /tmp/build_test/
Sending build context to Docker daemon 4.096 kB
Step 1 : FROM oraclelinux:6.9
 ---> 7a4a8c404142
Step 2 : MAINTAINER Johan Louwers 
 ---> Running in e7df0ce9533b
 ---> 6bd403a6a188
Removing intermediate container e7df0ce9533b
Step 3 : LABEL maintainer "louwersj@gmail.com"
 ---> Running in 5dbe161c94c3
 ---> c1ccf03f5aaa
Removing intermediate container 5dbe161c94c3
Step 4 : ARG VERSION
 ---> Running in 70f75e234ec3
 ---> 8789acea412c
Removing intermediate container 70f75e234ec3
Step 5 : ARG VCS_URL
 ---> Running in a6fcb917dab0
 ---> 5ec17fc93bd5
Removing intermediate container a6fcb917dab0
Step 6 : ARG VCS_REF
 ---> Running in 8581b2273afb
 ---> f38bd895e43e
Removing intermediate container 8581b2273afb
Step 7 : ARG BUILD_DATE
 ---> Running in 3b10331e2f96
.......................ETC ETC ETC...........
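
For reference, the opening of a Dockerfile matching the build steps above could look like the minimal sketch below; the MongoDB installation steps are omitted and this is an illustration, not the exact file used:

FROM oraclelinux:6.9
MAINTAINER Johan Louwers
LABEL maintainer="louwersj@gmail.com"
ARG VERSION
ARG VCS_URL
ARG VCS_REF
ARG BUILD_DATE
# ...the MongoDB installation steps would follow here...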

A check on the available images will show that we now have a new image named localbuild/mongodb:latest, as shown below:

[root@localhost /]# docker images
REPOSITORY           TAG                 IMAGE ID            CREATED              SIZE
localbuild/mongodb   latest              ac7816da045f        About a minute ago   251.4 MB
[root@localhost /]# 
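
As a side note: in case your Dockerfile has a non-standard name or lives outside the build context, Docker also allows you to point at the file explicitly with the -f flag while still passing a directory as the context. A hedged sketch of that variant of our build command:

docker build --no-cache=true --rm -f /tmp/build_test/Dockerfile -t localbuild/mongodb:latest /tmp/build_test/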

Using a local file (which can be pulled from a local Git repository) can be very valuable, especially if you need to mix the build of your image with artifacts from other builds, for example if you want to include the war files from a Maven build to provide a microservice from within a container concept. In case you want to build very specific containers that contain specific business functionality, the local file option is a possible route.

Saturday, July 15, 2017

Oracle Linux - Docker unable to delete image is referenced in one or more repositories

In our examples we are running a Docker engine on an Oracle Linux host, which we use to explain how you can work with Docker and containers. In this example post we need to remove a number of unused images; however, we are confronted with a reference dependency preventing the deletion of the unused image. The reason this issue occurred in this case is that we have two images present which are actually the same image, but tagged in a different manner.

The reason this happens is the way the Oracle Linux images are tagged when they are created and placed on the Docker Hub. We have one image tagged as 6.9 (the explicit version number) and one tagged as 6, which is a general reference to the highest version in the 6 series (which is 6.9). In effect the images 6.9 and 6 are exactly the same and are treated in the same manner.

Handling the version numbers as both 6 and 6.9 is a convenient thing, especially in cases where a 6.10 version could be created (which is not the case for Oracle Linux 6). People would know that if they pulled 6 they would always have the latest version, and if they wanted a specific version they could pull 6.x (in our case 6.9).

Now, we have pulled both 6.9 and 6 to our Docker engine; during a cleanup we would like to remove both of them, and we are faced with the issue below:

[root@localhost tmp]#
[root@localhost tmp]# docker images
REPOSITORY          TAG         IMAGE ID            CREATED             SIZE
oraclelinux         6           7a4a8c404142        3 weeks ago         170.9 MB
oraclelinux         6.9         7a4a8c404142        3 weeks ago         170.9 MB
[root@localhost tmp]#
[root@localhost tmp]#
[root@localhost tmp]# docker rmi 7a4a8c404142
Error response from daemon: conflict: unable to delete 7a4a8c404142 (must be forced) - image is referenced in one or more repositories
[root@localhost tmp]#
[root@localhost tmp]# 

As you can see the image IDs are the same; this is what causes the issue, as both tags reference the same underlying image. The way to resolve the issue is to force the image removal by using the -f flag in the command.

[root@localhost tmp]#
[root@localhost tmp]# docker rmi -f 7a4a8c404142
Untagged: oraclelinux:6
Untagged: oraclelinux:6.9
Untagged: oraclelinux@sha256:3501cce71958dab7f0486cd42753780cc2ff987e3f92bd084c95a53d52f4f1dc
Deleted: sha256:7a4a8c40414201cb671618dd99e8d327d4da4eba9d7991a86b191f4823925969
Deleted: sha256:d14f39f83be01eacab2aea7400a816a42ef7b8cdaa01beb8ff7102850248956d
[root@localhost tmp]#
[root@localhost tmp]# 

If you now check the list of available images you will notice that 7a4a8c404142 is gone; in fact, both the 6 and 6.9 tags that referenced 7a4a8c404142 are gone.
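
As an alternative to forcing the removal by image ID, you could untag each repository reference individually; once the last tag is removed the underlying image is deleted without needing the -f flag. A hedged sketch:

docker rmi oraclelinux:6
docker rmi oraclelinux:6.9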

Tuesday, July 11, 2017

Oracle Linux - remove containers from Docker

In our examples we are running a Docker engine on an Oracle Linux host, which we use to explain how you can work with Docker and containers. In this example post we need to remove a number of stopped containers which are still present on our Docker engine. One way of doing application updates in a container manner is to start containers with the newer version of your application, add them to the load balancing mechanism of your footprint and exclude the old version. As soon as the new version is receiving the requests you can stop the old containers. To ensure you can do a quick rollback it can be useful to keep the old containers around for some time. Other options are rolling upgrades, in which only part of your containers (say 50%) is updated at a time, as well as other update strategies which become extremely easy when working with containers.

In this example we want to remove a number of containers that are no longer running. We identify the containers by using the "docker ps" command in combination with a grep, as shown below:

[root@localhost log]# docker ps -a|grep Exited
21600ca72b4e        oracle/nosql        "java -jar lib/kvstor"   12 hours ago        Exited (143) 12 hours ago                                  kvlite3
388910430cee        oracle/nosql        "java -jar lib/kvstor"   12 hours ago        Exited (143) 12 hours ago                                  kvlite2
d6c2e4d9431a        oracle/nosql        "java -jar lib/kvstor"   12 hours ago        Exited (143) 12 hours ago                                  kvlite
[root@localhost log]# 

Now that we know which containers we want to remove (not only stop them, but also remove them), we can use docker rm. In the example below we remove a single container with the "docker rm" command:

[root@localhost log]# docker rm 21600ca72b4e
21600ca72b4e
[root@localhost log]#

If we now check the number of containers in the Exited state we will notice that only two are left and that we have removed one from our Docker engine.

[root@localhost log]# docker ps -a|grep Exited
388910430cee        oracle/nosql        "java -jar lib/kvstor"   12 hours ago        Exited (143) 12 hours ago                                  kvlite2
d6c2e4d9431a        oracle/nosql        "java -jar lib/kvstor"   12 hours ago        Exited (143) 12 hours ago                                  kvlite
[root@localhost log]#

As shown in other examples, we can provide the docker command with a range of container IDs to perform the same action on each of them. The same is the case for the rm command: we can provide a number of container IDs and they will all be removed in a single action. This is shown in the example below:

[root@localhost log]# docker rm 388910430cee d6c2e4d9431a
388910430cee
d6c2e4d9431a
[root@localhost log]#

In our case this leaves no containers in the Exited state, as shown in the example below:

[root@localhost log]# docker ps -a|grep Exited
[root@localhost log]#

Keeping containers around on your Docker engine for some time when you do an application upgrade can be very good practice in case you need to be able to do an extremely fast rollback when things go wrong. However, keeping them around during an upgrade is no excuse for not doing housekeeping and keeping your IT footprint clean. This means that at some point in time you need a cleanup step in your rollout and deployment plan. The above example shows how to remove a stopped container; other posts on this blog explain how to stop and start containers when needed. Taking options like this into account when creating a deployment and upgrade strategy can be vital to ensure a secure application upgrade with the option to roll back extremely fast.
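
As a hedged sketch of such a cleanup step, assuming none of the exited containers need to be kept, you could remove all containers in the Exited state in a single line by feeding the output of a filtered "docker ps" into "docker rm":

docker rm $(docker ps -a -q -f status=exited)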

Oracle Linux - start a stopped docker container

In our examples we are running a Docker engine on an Oracle Linux host, which we use to explain how you can work with Docker and containers. In this example post we need to start a stopped container again. Stopping a container does not remove it, which means we can start it again when needed. Having the option to stop containers and start them again is a great option, especially when you roll out a new version of an application landscape. A rollout strategy with a very fast way of rolling back to the original version can be supported by exactly this: the option to stop and start containers. Once your rollout is done correctly you can decide to remove the containers completely; having them around until you decide your rollout is fully complete can be a good practice.

When you execute a standard "docker ps" command you will only get the running containers; in our case we want to see all containers regardless of their state. For this we need to include the -a flag with the docker ps command, as shown in the example below:

[root@localhost log]# docker ps -a
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS                        PORTS               NAMES
c1db637d5612        oracle/nosql        "java -jar lib/kvstor"   11 hours ago        Exited (143) 30 minutes ago                       nosql_node_3
06fc415798e3        oracle/nosql        "java -jar lib/kvstor"   11 hours ago        Exited (143) 33 minutes ago                       nosql_node_2
bf2d698ebcb3        oracle/nosql        "java -jar lib/kvstor"   11 hours ago        Exited (143) 30 minutes ago                       nosql_node_1
0a52831c65e8        oracle/nosql        "java -jar lib/kvstor"   11 hours ago        Exited (130) 6 minutes ago                        nosql_node_0
[root@localhost log]# 

In case we want to start a container again, in our example node 0, we have to use the start command in combination with the container ID. This is shown in the example below:

[root@localhost log]# docker start 0a52831c65e8
0a52831c65e8
[root@localhost log]#

If we now execute a docker ps command (without the -a flag) we will see a list of running containers, and we will notice that node 0 of our Oracle NoSQL cluster is back online and ready to serve requests.

[root@localhost log]# docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                          NAMES
0a52831c65e8        oracle/nosql        "java -jar lib/kvstor"   11 hours ago        Up About a minute   5000-5001/tcp, 5010-5020/tcp   nosql_node_0
[root@localhost log]#

As with most commands you can provide multiple container IDs. This means that if we want to start the remaining nodes we can do so with a single command, as shown below:

[root@localhost log]# docker start c1db637d5612 06fc415798e3 bf2d698ebcb3
c1db637d5612
06fc415798e3
bf2d698ebcb3
[root@localhost log]#

Checking what is running will show that all four nodes of our Oracle NoSQL cluster are running on our Docker engine.

[root@localhost log]# docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                          NAMES
c1db637d5612        oracle/nosql        "java -jar lib/kvstor"   11 hours ago        Up 2 seconds        5000-5001/tcp, 5010-5020/tcp   nosql_node_3
06fc415798e3        oracle/nosql        "java -jar lib/kvstor"   11 hours ago        Up 2 seconds        5000-5001/tcp, 5010-5020/tcp   nosql_node_2
bf2d698ebcb3        oracle/nosql        "java -jar lib/kvstor"   11 hours ago        Up 2 seconds        5000-5001/tcp, 5010-5020/tcp   nosql_node_1
0a52831c65e8        oracle/nosql        "java -jar lib/kvstor"   11 hours ago        Up About a minute   5000-5001/tcp, 5010-5020/tcp   nosql_node_0
[root@localhost log]# 
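
In the same spirit as the cleanup one-liner shown earlier, a hedged sketch for starting every container in the Exited state in one go (assuming you really want all of them back up) could look like this:

docker start $(docker ps -a -q -f status=exited)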

Oracle Linux - finding your docker container IP

In our examples we are running a Docker engine on an Oracle Linux host, which we use to explain how you can work with Docker and containers. In this example post we have a single Oracle NoSQL container running on our Docker engine. When a Docker container is started in a default manner it will get an internal IP which is accessible within Docker by other containers. As part of your deployment model and scripting it is very likely that you want to know the IP address assigned to a newly started container without having to go into the container. To get this information directly you can use the inspect command of the Docker CLI. The inspect command provides a JSON response containing a large set of information about a specific container.

The inspect command is used in combination with the container ID. This means we first have to get the container ID; one way of getting it is the "docker ps" command, as shown below:

[root@localhost etc]# docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                          NAMES
0a52831c65e8        oracle/nosql        "java -jar lib/kvstor"   11 hours ago        Up 11 hours         5000-5001/tcp, 5010-5020/tcp   nosql_node_0
[root@localhost etc]# 

If we want to know more about the running container, identified with container ID 0a52831c65e8, we can execute the "docker inspect" command to retrieve the JSON response, as shown in the example below:

[root@localhost etc]# docker inspect 0a52831c65e8
[
    {
        "Id": "0a52831c65e86727b248bc66c2cb5e4be037f85473a6c0645f8b81c5f2381e4f",
        "Created": "2017-07-10T19:43:49.205922395Z",
        "Path": "java",
        "Args": [
            "-jar",
            "lib/kvstore.jar",
            "kvlite",
            "-secure-config",
            "disable",
            "-root",
            "/kvroot"
        ],
        "State": {
            "Status": "running",
            "Running": true,
            "Paused": false,
            "Restarting": false,
            "OOMKilled": false,
            "Dead": false,
            "Pid": 5364,
            "ExitCode": 0,
            "Error": "",
            "StartedAt": "2017-07-10T19:43:49.636183576Z",
            "FinishedAt": "0001-01-01T00:00:00Z"
        },
        "Image": "sha256:247be918b211e6690ad33463336e502c260b1a35010102d93967bd49dc061e46",
        "ResolvConfPath": "/var/lib/docker/containers/0a52831c65e86727b248bc66c2cb5e4be037f85473a6c0645f8b81c5f2381e4f/resolv.conf",
        "HostnamePath": "/var/lib/docker/containers/0a52831c65e86727b248bc66c2cb5e4be037f85473a6c0645f8b81c5f2381e4f/hostname",
        "HostsPath": "/var/lib/docker/containers/0a52831c65e86727b248bc66c2cb5e4be037f85473a6c0645f8b81c5f2381e4f/hosts",
        "LogPath": "/var/lib/docker/containers/0a52831c65e86727b248bc66c2cb5e4be037f85473a6c0645f8b81c5f2381e4f/0a52831c65e86727b248bc66c2cb5e4be037f85473a6c0645f8b81c5f2381e4f-json.log",
        "Name": "/nosql_node_0",
        "RestartCount": 0,
        "Driver": "devicemapper",
        "MountLabel": "",
        "ProcessLabel": "",
        "AppArmorProfile": "",
        "ExecIDs": null,
        "HostConfig": {
            "Binds": null,
            "ContainerIDFile": "",
            "LogConfig": {
                "Type": "json-file",
                "Config": {}
            },
            "NetworkMode": "default",
            "PortBindings": {},
            "RestartPolicy": {
                "Name": "no",
                "MaximumRetryCount": 0
            },
            "AutoRemove": false,
            "VolumeDriver": "",
            "VolumesFrom": null,
            "CapAdd": null,
            "CapDrop": null,
            "Dns": [],
            "DnsOptions": [],
            "DnsSearch": [],
            "ExtraHosts": null,
            "GroupAdd": null,
            "IpcMode": "",
            "Cgroup": "",
            "Links": null,
            "OomScoreAdj": 0,
            "PidMode": "",
            "Privileged": false,
            "PublishAllPorts": false,
            "ReadonlyRootfs": false,
            "SecurityOpt": null,
            "UTSMode": "",
            "UsernsMode": "",
            "ShmSize": 67108864,
            "Runtime": "runc",
            "ConsoleSize": [
                0,
                0
            ],
            "Isolation": "",
            "CpuShares": 0,
            "Memory": 0,
            "CgroupParent": "",
            "BlkioWeight": 0,
            "BlkioWeightDevice": null,
            "BlkioDeviceReadBps": null,
            "BlkioDeviceWriteBps": null,
            "BlkioDeviceReadIOps": null,
            "BlkioDeviceWriteIOps": null,
            "CpuPeriod": 0,
            "CpuQuota": 0,
            "CpusetCpus": "",
            "CpusetMems": "",
            "Devices": [],
            "DiskQuota": 0,
            "KernelMemory": 0,
            "MemoryReservation": 0,
            "MemorySwap": 0,
            "MemorySwappiness": -1,
            "OomKillDisable": false,
            "PidsLimit": 0,
            "Ulimits": null,
            "CpuCount": 0,
            "CpuPercent": 0,
            "IOMaximumIOps": 0,
            "IOMaximumBandwidth": 0
        },
        "GraphDriver": {
            "Name": "devicemapper",
            "Data": {
                "DeviceId": "34",
                "DeviceName": "docker-251:1-1835143-986c026ad2d7e69cae96df9d46f1d15c23c88103f7dc7d7756a8be2f50f474ea",
                "DeviceSize": "10737418240"
            }
        },
        "Mounts": [
            {
                "Name": "a37d7c33e0d78922160c4b411f13350e9692dddf26cdea794a5dd6f266723175",
                "Source": "/var/lib/docker/volumes/a37d7c33e0d78922160c4b411f13350e9692dddf26cdea794a5dd6f266723175/_data",
                "Destination": "/kvroot",
                "Driver": "local",
                "Mode": "",
                "RW": true,
                "Propagation": ""
            }
        ],
        "Config": {
            "Hostname": "0a52831c65e8",
            "Domainname": "",
            "User": "",
            "AttachStdin": false,
            "AttachStdout": false,
            "AttachStderr": false,
            "ExposedPorts": {
                "5000/tcp": {},
                "5001/tcp": {},
                "5010/tcp": {},
                "5011/tcp": {},
                "5012/tcp": {},
                "5013/tcp": {},
                "5014/tcp": {},
                "5015/tcp": {},
                "5016/tcp": {},
                "5017/tcp": {},
                "5018/tcp": {},
                "5019/tcp": {},
                "5020/tcp": {}
            },
            "Tty": false,
            "OpenStdin": false,
            "StdinOnce": false,
            "Env": [
                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                "JAVA_HOME=/usr/lib/jvm/java-openjdk",
                "VERSION=4.3.11",
                "KVHOME=/kv-4.3.11",
                "PACKAGE=kv-ce",
                "EXTENSION=zip",
                "BASE_URL=http://download.oracle.com/otn-pub/otn_software/nosql-database/",
                "_JAVA_OPTIONS=-Djava.security.egd=file:/dev/./urandom"
            ],
            "Cmd": [
                "java",
                "-jar",
                "lib/kvstore.jar",
                "kvlite",
                "-secure-config",
                "disable",
                "-root",
                "/kvroot"
            ],
            "Image": "oracle/nosql",
            "Volumes": {
                "/kvroot": {}
            },
            "WorkingDir": "/kv-4.3.11",
            "Entrypoint": null,
            "OnBuild": null,
            "Labels": {}
        },
        "NetworkSettings": {
            "Bridge": "",
            "SandboxID": "3f3505304a821da97a668ae622a09738cf8c88768e77f5c1995154f431461700",
            "HairpinMode": false,
            "LinkLocalIPv6Address": "",
            "LinkLocalIPv6PrefixLen": 0,
            "Ports": {
                "5000/tcp": null,
                "5001/tcp": null,
                "5010/tcp": null,
                "5011/tcp": null,
                "5012/tcp": null,
                "5013/tcp": null,
                "5014/tcp": null,
                "5015/tcp": null,
                "5016/tcp": null,
                "5017/tcp": null,
                "5018/tcp": null,
                "5019/tcp": null,
                "5020/tcp": null
            },
            "SandboxKey": "/var/run/docker/netns/3f3505304a82",
            "SecondaryIPAddresses": null,
            "SecondaryIPv6Addresses": null,
            "EndpointID": "28368b7c9058e300d08cc3a3453568cb903504ed33b4c61c6349963ad190160f",
            "Gateway": "172.17.0.1",
            "GlobalIPv6Address": "",
            "GlobalIPv6PrefixLen": 0,
            "IPAddress": "172.17.0.2",
            "IPPrefixLen": 16,
            "IPv6Gateway": "",
            "MacAddress": "02:42:ac:11:00:02",
            "Networks": {
                "bridge": {
                    "IPAMConfig": null,
                    "Links": null,
                    "Aliases": null,
                    "NetworkID": "fc7e96b764adee7ae2a6f061b766c61ea82ec7754faa64f2483889fa0a5a3a5f",
                    "EndpointID": "28368b7c9058e300d08cc3a3453568cb903504ed33b4c61c6349963ad190160f",
                    "Gateway": "172.17.0.1",
                    "IPAddress": "172.17.0.2",
                    "IPPrefixLen": 16,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "MacAddress": "02:42:ac:11:00:02"
                }
            }
        }
    }
]
[root@localhost etc]#

As you can see above, the "docker inspect" command provides a very rich set of information which can be used for a variety of things. However, in our example case we only wanted to know the IP address assigned to the container. This means we have to extract the IP from the JSON response. The command below will help you extract just that from the larger set of information:

[root@localhost etc]# docker inspect 0a52831c65e8 | grep IPAddress | cut -d '"' -f 4| sort -r | head -1
172.17.0.2
[root@localhost etc]#
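
As an alternative to grepping the raw JSON, docker inspect also accepts a --format flag that takes a Go template and extracts exactly the field you need; a sketch of the same lookup:

docker inspect --format '{{ .NetworkSettings.IPAddress }}' 0a52831c65e8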

The above examples could be the starting point of a bash function in a wider script, which allows you to simply call the function with the container ID as a variable and get back the IP information you need. Knowing and understanding how to quickly extract the IP information from the inspect command can help you in building scripting around Docker.
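
A minimal sketch of such a function, using the hypothetical name docker_ip and taking the container ID as its first argument:

docker_ip() {
  # Print the bridge IP of the container passed as $1
  docker inspect --format '{{ .NetworkSettings.IPAddress }}' "$1"
}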