Thursday, December 29, 2016

Oracle Linux - Logging and analyzing your network connections

Whenever you run a relatively large network with multiple hosts connecting to each other you will have to think about network monitoring and optimization as well as security. Some organisations insist on a very strict firewall policy by implementing network firewalls as well as local firewalls on the operating systems. Having a strict firewall regime in place, where network traffic between compartments travels via a network firewall and systems within a compartment can only communicate if allowed by the local firewall on the operating system, is in general a good thing from a security point of view.

From an administrative point of view, however, things become a lot more difficult. Due to this a lot of organisations will not implement local firewalls on machines and will allow systems within a certain network compartment to freely talk to each other.

Even though it might take a lot more effort and thinking, it is good practice to also ensure local firewalls are in place to tighten security. In case you intend to implement this in an existing environment it might be hard to do so based upon architecture documents alone. Organically grown networks and applications might have established connections over time that are not captured in the enterprise architecture.

To find out what the actual network usage is and how connections are actually made you will have to start collecting data on network connections. Even if you are not trying to put a stricter firewall regime in place it is good to understand who talks to whom. Having real-time insight into connections improves awareness and provides options to improve security, remove single points of failure and improve the level of service.

Storing your data for analysis
Whenever you start collecting data for analysis you will have to store it. Good options for storing this type of data are, for example, centralized logging platforms such as the Elastic stack or Splunk. Whenever selecting a location to store your data you will have to consider that you need to be able to easily extract the data again and use it for analysis. In addition, you can also store the data on, for example, HDFS and use MapReduce based solutions to extract meaning from it. However, if you need a true big data solution to analyse all the data you are most likely no longer talking about a "relatively large network" but about a genuinely large network.

Collecting the data
Regardless of what technology you use to store the information centrally, be it Beats or Splunk or any other technology, you will have to capture the connection data first. On an Oracle Linux instance you can use the standard iptables firewall to do this work. If you use firewalld the same is possible, however that is not covered in this post.

What needs to be done is to put the firewall in a mode where it logs all new connections. That means inbound as well as outbound connections. To do so you can use the below commands to instruct iptables to log every new connection to the standard log location.

iptables -I OUTPUT -m state --state NEW --protocol tcp -j LOG --log-prefix "New Connection: "
iptables -I INPUT -m state --state NEW --protocol tcp -j LOG --log-prefix "New Connection: "
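
To check that the logging rules are active you can list the current rules and, assuming you use the standard iptables service on Oracle Linux 6, persist them across a reboot. The below is a minimal sketch; the exact persistence step may differ depending on how iptables is managed on your systems.

iptables -L -n --line-numbers | grep LOG
service iptables save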

Understanding the records
Adding the two rules to iptables as shown above will make sure that new connections are logged. You will have to implement a way to collect the log records from all systems in a single location and you will have to find a way to analyze the data to uncover the connection patterns between systems. However, before you can do this it is important to understand the log records themselves.

The below two examples show an inbound and an outbound connection in the log files, where a connection was made between xft83.company.com (10.103.11.83) and xft82.company.com (10.103.11.82).

Dec 27 16:43:13 xft83.company.com kernel: New Connection: IN=eth0 OUT= MAC=00:21:f6:01:00:02:00:21:f6:01:00:01:08:00 SRC=10.103.11.82 DST=10.103.11.83 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=6662 DF PROTO=TCP SPT=50525 DPT=5044 WINDOW=14600 RES=0x00 SYN URGP=0

Dec 27 16:45:44 xft83.company.com kernel: New Connection: IN= OUT=eth0 SRC=10.103.11.83 DST=10.103.11.82 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=26870 DF PROTO=TCP SPT=64899 DPT=9200 WINDOW=14600 RES=0x00 SYN URGP=0

As you can see the messages differ a little from each other, however they hold a number of the same pieces of information. The main difference between the outbound and inbound messages is that MAC is added to inbound log records to provide the MAC address of the sending NIC that initiated the connection. The fields have the following meaning:

  • IN : stating the NIC used for the inbound connection. This will only be filled in case of an inbound connection.
  • OUT : stating the NIC used for the outbound connection. This will only be filled in case of an outbound connection.
  • MAC : stating the MAC address of the NIC used by the sending party. This is only filled in case of an inbound connection.
  • SRC : source IP address of the sending party that initiates the connection
  • DST : destination IP address of the receiving party for whom the connection is intended
  • LEN : packet length
  • TOS : Type of Service (for packet prioritization)
  • PREC : precedence bits
  • TTL : Time To Live
  • ID : packet identifier
  • DF : don't fragment (DF) bit
  • PROTO : the protocol used. In our example we filter only for TCP based connections
  • SPT : source port number on the source IP address of the sending party that initiates the connection
  • DPT : destination port number on the destination IP address of the receiving party for whom the connection is intended
  • WINDOW : size of the TCP window
  • RES : reserved bits
  • SYN : SYNchronize packet for the TCP 3-way handshake
  • URGP : urgent pointer

Not listed above are, for example, ACK and SYN-ACK, which together with SYN form the TCP 3-way handshake. During the 3-way handshake the following will happen:

  • Host A sends a TCP SYNchronize packet to Host B
  • Host B receives A's SYN
  • Host B sends a SYNchronize-ACKnowledgement
  • Host A receives B's SYN-ACK
  • Host A sends ACKnowledge
  • Host B receives ACK.
  • TCP socket connection is ESTABLISHED.

This means that if we are looking for the packets sent by the host that initiates a new TCP socket connection we have to look for packets with SYN.
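
As a quick illustration, assuming the kernel log records end up in /var/log/messages (the default syslog location on Oracle Linux 6), a minimal sketch to list and count the source, destination and destination port of every logged new connection could look like this:

grep "New Connection:" /var/log/messages | grep "SYN" | \
  awk '{for(i=1;i<=NF;i++) if ($i ~ /^(SRC|DST|DPT)=/) printf "%s ", $i; print ""}' | sort | uniq -c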

Wrapping it together 
By capturing this information from all servers and storing it in a central location you will have the ability to show how network traffic is flowing in your network. The result might be totally different than what is outlined in the architecture documents, especially in cases where the IT footprint has grown over the years. Having this insight helps in gaining understanding and showcasing potential risks and issues with the way your IT footprint is constructed.

This insight also helps in planning changes, introducing new functionality and improvement and helps in implementing a more strict security regime while not hindering ongoing operations. 

Wednesday, December 28, 2016

Oracle Cloud - Governance Risk and Compliance Framework based deployments

Regulations, compliance rules, guidelines and security directives should all be part of the overall governance, risk and compliance framework implemented within your company. Governance, risk and compliance frameworks are developed and implemented to bring structure to how companies organize themselves to protect against risks and how they react in case of an incident.

When developing a governance, risk and compliance framework to be implemented within a large organisation, companies often look at market best practice standards, regulatory requirements and internal standards. Within a wider model the organisation will be required to look at strategic risks, operational risks and tactical risks to ensure the entire organisation is covered by the governance, risk and compliance framework.


Information systems and data - tier 3
As can be seen in the above image, which shows a high level view of a common governance, risk and compliance framework outline, the framework consists of 3 tiers. Only tier 3 includes information systems and data as a primary focus. A framework and its implementation is only complete, and only makes sense, if you focus on all 3 tiers and not only on one or two of them.

Even though this post will focus on tier 3, information systems and data, in an overall governance, risk and compliance framework all the layers should be covered. Having stated that, all 3 tiers will influence each other and the end-to-end model cannot be created without cross functions and overlap.

Implementation by standardization and Enterprise Architecture
When looking at tier 3 specifically, one of the things that becomes obvious rather quickly is that creating and implementing a governance, risk and compliance framework will require standardization.

To be able to build a good and workable governance, risk and compliance framework you will have to standardize and limit the number of technologies as much as possible. Building standardized building blocks, adopting them in the enterprise architecture repository for your organisation and ensuring they are complemented with standard implementation and security rules is a good rule by default. Ensuring the standardized solution building blocks and the governance, risk and compliance framework complement each other is a vital cornerstone for being successful in tier 3 implementations.

The below diagram shows a part of TOGAF relevant to building a governance, risk and compliance framework enabled Enterprise Architecture.


When developing a governance, risk and compliance framework you will have to ensure that this framework will be adopted in the Enterprise Architecture framework as a standard. This means that the framework will have to be implemented in the standards information base. By doing so it will ensure that it is included in the architecture landscape and as a result will end up in the solution building blocks.

The "Standards Information Base" will hold standards on architecture, standards on configuration as well as coding and development standards. This will also hold all standards coming from the governance, risk and compliance framework to ensure that this is embedded in the foundation of the architecture.

It is of vital importance to ensure that not only the standards coming from the governance, risk and compliance framework are included in the "Standards Information Base". It is equally important to ensure that they are applied and used and that an architecture compliance review process is in place to ensure this.

Having the standards derived from the governance, risk and compliance framework embedded in the enterprise architecture, and ensuring that they are applied on an architecture level and used in the right manner, will help enforce the implementation in tier 3.

Compliance assessment
Having a governance, risk and compliance framework in place, embedding it in the "Standards Information Base" of your Enterprise Architecture Repository and ensuring with an architecture compliance review process that the standards are included in the resulting architectures and solution building blocks is only a part of the end-to-end solution.

Ensuring that your architecture is in line with the requirements stated in the governance, risk and compliance framework is not a guarantee that it is implemented in this manner. And when it is implemented in compliance with the standards it is not a guarantee that it will remain that way during operations.

What is required to ensure a correct level of compliance with the standards is constant monitoring of the current deployments and of the level to which they are compliant. Solutions like Puppet can be used up to a certain level to complete this task and report the deviation from the standard requirements, however Puppet (and similar tools) are not designed for this specific purpose and can only do this up to a certain level.

Oracle provides a fully built-for-purpose solution as part of the Oracle Management Cloud Service. The Oracle Compliance Cloud Service is a software-as-a-service solution that enables the IT and business compliance function to assess and score against industry standard benchmarks, REST-based cloud resources and your own custom rules. With the Oracle Compliance Cloud Service you can score, assign and remediate compliance violations both on premise and in the cloud.



The Oracle Compliance Cloud Service will allow you to monitor systems and applications deployed in the Oracle Cloud, in other public clouds, in your local datacenter and in your private cloud. It provides constant monitoring and reporting to give you real-time insight into the level of compliance, either against the standards defined by your own organisation or against industry and regulatory standards.

Having constant real-time insight and the ability to define automatic actions in case a check fails ensures that you gain more control over the actual implementation of the governance, risk and compliance framework in tier 3. Having the option to do real-time and constant assessments will directly uncover situations that might lead to issues and empowers IT to ensure security, reliability and compliance at all times.

Tuesday, December 27, 2016

Oracle Linux - Peer cert cannot be verified or peer cert invalid

Whenever trying to update or install a package with Oracle Linux using YUM you will connect to a local or a remote YUM server which will serve you a list of available packages. By default, and based upon good practice, this connection will be encrypted. In some cases however a secure connection cannot be made. An example of such a case is when you need to rely on a proxy to the outside world and the proxy is not configured in the right manner to allow you to set up a correct certificate based connection.

In those cases you might end with an error as shown below:
: [Errno 14] Peer cert cannot be verified or peer cert invalid

A couple of options are available to resolve this issue. The simplest way is to instruct yum not to verify the SSL connection between the server and the YUM repository. To set this as a global setting, and by doing so resolve error number 14 (as shown above), you have to edit the configuration file /etc/yum.conf

In /etc/yum.conf you have to ensure that sslverify is set to false. This means the below setting should be changed from true to false;

sslverify=false
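
In case you do not want to disable SSL verification globally, yum also supports the sslverify option per repository. As a sketch, you could add the option only to the repository definition that is reached via the misbehaving proxy, in a file under /etc/yum.repos.d; the repository id and baseurl below are hypothetical examples.

[ol6_example_repo]
name=Example Oracle Linux repository
baseurl=https://yum.example.com/repo/OracleLinux/OL6/latest/x86_64/
enabled=1
gpgcheck=1
sslverify=false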

Thursday, December 22, 2016

Oracle cloud – code, deploy, test, destroy (and repeat)

Traditionally, in old-fashioned environments, it was hard to quickly build and test your code in any other way than on the system you developed the code on. Developers would develop code on a development system, do the unit testing on this system and after that move the code to a test system. The next phases would be integration testing, user acceptance testing and the production stage, or any other form your company adopted.

However, when developing code you want to quickly deploy it, test the things you created, find the bugs and mistakes and repeat the process. Until recently it was quite hard to build your code quickly and deploy it on a test system that was only used for this specific test and could be discarded afterwards.

Having the option to test quickly and often, without the burden of requesting an infrastructure team to prepare a fresh test system, is a big benefit for developers and ensures that both the quality of the code and the speed of development increase. The below image shows a more modern approach to DevOps and a more modern way of developing code.

DevOps Flow in the Oracle Cloud

Using an integrated environment
The above image outlines multiple streams for unit testing, integration testing and production deployments in a DevOps environment. When using such a model developers will be able to push code to a central version control system and state which version of the code has to be built (compiled) automatically, deployed on a freshly provisioned environment and (automatically) tested.

This model only works in a highly automated cloud environment. To achieve this you will have to ensure that all components are driven by automated, fully self-service processes, where DevOps developers can start a build, the deployment of new environments and the deployment of the built code. The solution needs to be a fully integrated environment where all nodes (deployed server builds) are fully stateless and code is pulled from the source repository.

Using an integrated environment in this manner will enable you to automatically build your code, deploy it on servers, and test it in an automated manner. The same approach applies to the integration testing and deployment-to-production phases, making use of exactly the same routines and source code, and by doing so ensuring the production state will be exactly what was tested, without the risk of human error.

At the same moment, the entire testing and deploying steps will only take a fraction of the time of what it might take when done manually.

Automated code building / build automation
The process of build automation in this DevOps based deployment cycle will take care of downloading a specific branch of the source repository tree and building it into a "product" that can be deployed on a node. Depending on the language you develop your code in, the way you automate other parts of the "stream" and other aspects, you will have to select a certain build automation tool.

Hudson and Jenkins are currently popular build automation tools used for continuous integration. However, depending on your situation other tools might very well be applicable. As an example, you might want to use BuildBot or Team Foundation Server. When using the Oracle Developer Cloud Service you will be able to use a hosted version of Hudson.
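
To make the idea of such a build step concrete, the sketch below shows the kind of shell step a build job could execute on a build server. The repository URL, branch name and Maven build command are purely hypothetical and would depend entirely on your own project setup.

#!/bin/bash
# hypothetical build step: fetch a specific branch and build it into a deployable "product"
set -e
BRANCH="feature/new-functionality"
WORKSPACE="/tmp/build-workspace"

rm -rf "$WORKSPACE"
git clone --branch "$BRANCH" --depth 1 https://scm.example.com/myapp.git "$WORKSPACE"
cd "$WORKSPACE"
mvn clean package          # produces the artifact that will be deployed on a node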

Deployment automation
When using the Oracle Cloud you will want to ensure that the automatically built code is deployed on one or multiple systems. In this case "systems" can refer to multiple types of Oracle Public Cloud solutions. As an example, it can be a Docker container in the Oracle Container Cloud Service or an application server running on an Oracle Linux instance in the Oracle Compute Cloud Service. However, it can just as well be PL/SQL code deployed in a database in the Oracle Database Cloud Service.

Ensuring you "pack" all the different targets where the code needs to be deployed, the configuration of the individual nodes and all the configuration to bind them together. A growing number of tools and platforms are available to asist in this task, from complex solutions to the more easy to maintain and understand solutions like for example RunDeck.

Binding it all together
When you intend to build a solution as shown above and make use of the benefits of fully automated end-to-end continuous deployment and integration, you will have to build a vision and direction and arrange the corresponding tools in a DevOps tool chain. When developing this vision and the underlying technology the biggest challenge is to ensure that everything can be integrated into one single fully automated deployment orchestration.

Oracle supports this with a set of tooling within the Oracle Developer Cloud Service and with a rich set of API driven services. However, to ensure a fully fitting deployment automation solution it is very likely you will select a number of tools that are not available by default in the Oracle Cloud services but are very capable of running on the Oracle Cloud.

Monday, December 19, 2016

Oracle Cloud – Changing the tool chain for DevOps

Organisations used to make use of a waterfall based strategy with a clear split between development and operations: a model in which developers were tasked to develop new functionality and improve existing functionality based upon change requests, while operations departments were tasked with running the production systems with the code developed by the development teams, without a clear feedback channel to the developers. With the adoption of new ways of working and the rise of DevOps a change is happening in companies of all sizes.

Development and operations departments are merged together and form DevOps teams focused around a set of products from both a development as well as an operational run point of view, as opposed to being focused on development only or operational support only.

The general view and the general outcome of this model is that by merging operational and development teams into one team responsible for the entire lifecycle of a product the overall technical quality of the solution as well as the quality of the operational use improves.

Transition challenges
Traditionally organized teams making the transition from a split operations and development model to a DevOps model will face a number of challenges. One of the most challenging parts of this transition will be the culture shift and the change of responsibilities and expectations for each individual in the team. People tasked with operational support will now be required to also work on improving the solution and extending functionality. People who are used to working only on developing new code are now also expected to support the solution in the run phase.

Another challenging part of the transition will be the adoption of a totally new way of working and a new toolset. When working with the Oracle Cloud in a DevOps way you will have the option to use all the tools out of the standard DevOps tool chain. 

Changing the tool chain
When moving from a more traditional way of IT operations to a DevOps operational model you will see that the tool chain is changing. The interesting part is that most of the common tools in the DevOps tool chain are open, and most of them are open source. Secondly, there is no single, one-size-fits-all DevOps tool. Rather, the most effective results come from standardizing on a tool chain that maps directly to best practices such as version control, peer review and continuous delivery, all built on a foundation of managing infrastructure as code and continuous delivery and deployment.

There is a large set of tools which are commonly used in different DevOps setups, in situations of conventional IT, private cloud, public cloud and hybrid cloud alike. The combination of tools and how they are used in a specific DevOps footprint is primarily driven by the type of applications that are maintained, the level of expertise of the DevOps team and the level of integration with tools already present in the tool chain.

Oracle Cloud DevOps ToolChain

DevOps tool chain in the Oracle Cloud
A growing part of the DevOps tool chain is available by default in the Oracle Public Cloud, enabling your DevOps teams to start using it from day one. When using the Oracle Public Cloud, consuming for example PaaS and IaaS services, and building a DevOps tool chain around the Oracle Public Cloud and other public and private clouds, it is wise to look into the options provided by Oracle.

Oracle Cloud DevOps ToolChain

As can be seen in the above image the Oracle Developer Cloud Service already provides a large set of DevOps tools out of the box. In addition, the Oracle Compute Cloud Service provides IaaS and the ability to run Oracle Linux instances, which you can use to run whatever DevOps tool you need and integrate it with the tools that are already available in the Oracle Developer Cloud Service.

Monitoring and Orchestration 
As DevOps revolves not only around developing and deploying code but also includes constant monitoring of your environment and the orchestration of the end-to-end flow, a large part of a DevOps team's time is spent on this. Within the open and open source tool chain a lot of different tools can be found that are capable of doing so.

DevOps teams can decide to make use of those tools, or they can make use of services provided from within the Oracle Cloud. Tools provided by the Oracle Cloud for this can be used for DevOps tasks in the Oracle Cloud as well as in other clouds or on-premise deployed solutions.

As an example, the Oracle Orchestration Cloud Service can take on a large part of the end-to-end orchestration tasks within the DevOps model. Next to this, within the management portfolio of the Oracle Public Cloud, tools for application monitoring, infrastructure monitoring and log analytics can be found; tasks also commonly given to solutions like the Elasticsearch stack or Splunk.

The Oracle Public Cloud picture
If we combine the above in one picture we see that we can run the entire DevOps tool chain in the Oracle Cloud. This includes the needed feedback from continuous monitoring by leveraging the products from the Oracle Cloud monitoring portfolio.


As you can see, the Oracle Cloud is not the only target when running the DevOps tool chain in the Oracle Cloud; you can use it as a central tool location to develop and operate solutions deployed at any location. This can be the Oracle Public Cloud, a private cloud or another non-Oracle public cloud.

Sunday, December 11, 2016

Oracle Linux - transform CSV into JSON with bash

Like it or not, a large number of outputs created by systems is still in CSV (comma separated value) file format. The amount of information that is created, and needs processing, in CSV format is large, and it is good to understand how you can script against CSV files in your Oracle Linux bash scripts to use the information or to transform it into other formats. As an example we use the below, which is a section of an output file generated by an application that logs the use of doors within a building.

[root@localhost tmp]# cat example.csv
100231,AUTHORIZED,11-DEC-2016,13:12:15,IN,USED,F2D001
100231,AUTHORIZED,11-DEC-2016,13:14:01,IN,USED,F2D023
100231,AUTHORIZED,11-DEC-2016,13:15:23,IN,TIMEOUT,F2D024
100231,AUTHORIZED,11-DEC-2016,13:15:59,IN,USED,F2D024
100562,AUTHORIZED,11-DEC-2016,13:16:01,IN,USED,F1D001
100562,AUTHORIZED,11-DEC-2016,13:16:56,IN,USED,F1D003
100562,AUTHORIZED,11-DEC-2016,13:20:12,OUT,USED,F1D003
100562,AUTHORIZED,11-DEC-2016,13:20:58,IN,USED,F1D004
100231,AUTHORIZED,11-DEC-2016,13:20:59,OUT,USED,F2D024
[root@localhost tmp]#

As you can see from the above data we have some information in a per-line format using commas to separate the fields. In this example the fields have the following meaning:

  • The ID of the access card used
  • The status of the authorization request by the card for a certain door
  • The date the authorization request was made
  • The time the authorization request was made
  • The direction of the revolving door the request was made for
  • The usage status, this can be USED or can be TIMEOUT in case the door was not used
  • The ID for the specific revolving door

The number of things you might want to do with this from a data mining or security point of view is endless; however, having a CSV file on the file system of your Oracle Linux server does not make it directly useful. You will have to do something with it. The below example script shows how you can use bash scripting to read the CSV file itself;

#!/bin/bash
INPUT=/tmp/example.csv
OLDIFS=$IFS
IFS=,
[ ! -f $INPUT ] && { echo "$INPUT file not found"; exit 99; }
while read cardid checkstatus checkdate checktime doordirection doorstatus doorid
do
        echo "card used : $cardid"
        echo "check outcome : $checkstatus"
        echo "date : $checkdate"
        echo "time : $checktime"
        echo "direction : $doordirection"
        echo "usage : $doorstatus"
        echo "door used : $doorid"
        echo "----------------------"
done < $INPUT
IFS=$OLDIFS

As you can see in the above example we use the IFS variable to read and separate the values in the CSV file and place them in their own respective variables. The $IFS variable is a special shell variable and stands for Internal Field Separator. The Internal Field Separator (IFS) is used for word splitting after expansion and to split lines into words with the read builtin command. Whenever trying to split lines into words you will have to look into the $IFS variable and how to use it.

The above example is quite simple and just prints the CSV file in a different way to the screen. More interesting is how you could transform a CSV file into something else, for example a JSON file. In the below example we will transform the CSV file into a correctly formatted JSON file.

#!/bin/bash
# NAME:
#   CSV2JSON.sh
#
# DESC:
#  Example script on how to convert a .CSV file to a .JSON file. The
#  code has been tested on Oracle Linux, expected to run on other
#  Linux distributions as well.
#
# LOG:
# VERSION---DATE--------NAME-------------COMMENT
# 0.1       11DEC2016   Johan Louwers    Initial upload to github.com
#
# LICENSE:
# Copyright (C) 2016  Johan Louwers
#
# This code is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This code is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this code; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
# 02110-1301, USA.
# *
# */

 inputFile=/tmp/example.csv
 OLDIFS=$IFS


 IFS=,
  [ ! -f $inputFile ] && { echo "$inputFile file not found"; exit 99; }

# writing the "header" section of the JSON file to ensure we have a
# good start and the JSON file will be able to work with multiple
# lines from the CSV file in a JSON array.
 echo "
  {
   \"checklog\":
   ["

# ensuring we have the number of lines from the input file as we
# have to ensure that the last part of the array is closed in a
# manner that no more information will follow. (not closing the
# the section with "}," however closing with "}" instead to
# prevent incorrect JSON formats. We will use a if check in the
# loop later to ensure this is written correctly.
 csvLength=`cat $inputFile | wc -l`
 csvDepth=1

 while read cardid checkstatus checkdate checktime doordirection doorstatus doorid
  do
     echo -e "   {
      \"CARDCHECK\" :
       {
        \"CARDID\" : \"$cardid\",
        \"CHECKSTATUS\" : \"$checkstatus\",
        \"CHECKDATE\" : \"$checkdate\",
        \"CHECKTIME\" : \"$checktime\",
        \"DIRECTION\" : \"$doordirection\",
        \"DOORSTATUS\" : \"$doorstatus\",
        \"DOORID\" : \"$doorid\"
       }"
     if [ "$csvDepth" -lt "$csvLength" ];
      then
        echo -e "     },"
      else
        echo -e "     }"
     fi
   csvDepth=$(($csvDepth+1))
  done < $inputFile

# writing the "footer" section of the JSON file to ensure we do
# close the JSON file properly and in accordance to the required
# JSON formating.
 echo "   ]
  }"

 IFS=$OLDIFS

As you can see, and test, you will now have a valid JSON output format. As JSON is much more of a standard at this moment than CSV, you can more easily use this JSON format in the next step of the process. You could for example send it to a REST API endpoint for storing in a database, or upload it directly to Elasticsearch, or......
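
To quickly check that the generated output is indeed valid JSON you can, for example, pipe it through Python's json.tool module. This assumes you saved the script above as csv2json.sh and that Python (2.6 or later) is installed, which is the case on a default Oracle Linux installation.

./csv2json.sh > /tmp/example.json
python -m json.tool < /tmp/example.json > /dev/null && echo "valid JSON"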

However, as stated, the entire way of working uses the $IFS variable available to you as an integrated part of the Linux shell. The above csv2json.sh code is uploaded to GitHub and in case of any changes they will be made only on github.com; feel free to fork it and use it for your own projects.

Oracle Linux – short tip #4 - grep invert-match

When working and scripting with Oracle Linux you will use grep at some point in time. Grep searches the input for lines containing a match to the given pattern and, by default, prints the matching lines. Whenever trying to find something in a file you might want to use a combination of cat and pipe the output of cat to grep to show you only the matching lines. And, as stated, grep will show you only the lines that match the given pattern.

Lets assume we have the below file with some random data in it as an example;

[root@localhost tmp]# cat example.txt
#this is a file with some example content

line 1 - abc
line 2 - def

line 3 - ghi
jkl
mno

pqr

#and this is the end of the file
[root@localhost tmp]#

A simple case would be to show you some specific content, for example "abc" and print this to the screen which can be done using the below command;

[root@localhost tmp]# cat example.txt | grep abc
line 1 - abc
[root@localhost tmp]#

Or we could print all lines containing "line" as shown below;

[root@localhost tmp]# cat example.txt | grep line
line 1 - abc
line 2 - def
line 3 - ghi
[root@localhost tmp]#

However, this is only an example showing you how to show lines that match. Something less commonly used is an invert match, showing you all the lines that do NOT match the defined pattern. The invert match can be done using the -v option of grep.

As an example we might want to remove all the lines starting with "#" from the output. This might be very useful when, for example, trying to quickly read a configuration file which contains a lot of examples. If you try to read the configuration file of the Apache webserver, most of the httpd.conf file consists of examples and comments you might not be interested in, and you would like to remove those from the output to quickly see what the actual active configuration is. Below is an example where we use the invert match option of grep to remove those lines from the output;

[root@localhost tmp]# cat example.txt | grep -v '^#'

line 1 - abc
line 2 - def

line 3 - ghi
jkl
mno

pqr

[root@localhost tmp]#

Even though this is already helping a bit, we might also want to remove the empty lines. To make the example more readable we show it in a couple of steps: the first step is the cat, the second step is the invert match on lines starting with "#" and the third step is the invert match on empty lines;

[root@localhost tmp]# cat example.txt | grep -v '^#'|  grep -v '^$'
line 1 - abc
line 2 - def
line 3 - ghi
jkl
mno
pqr
[root@localhost tmp]#
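
As a side note, the two invert matches can also be combined into a single grep call by using an extended regular expression; this gives the same result as the pipeline above:

[root@localhost tmp]# cat example.txt | grep -Ev '^(#|$)'
line 1 - abc
line 2 - def
line 3 - ghi
jkl
mno
pqr
[root@localhost tmp]#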

As you might already be well up to speed in using grep to match all kinds of output, the learning curve for using invert match is nearly zero. It is just a matter of using the -v option to exclude things instead of the default include matching behavior of grep. The grep command is available by default in Oracle Linux and in almost every other Linux distribution.

Oracle Linux – short tip #3 – showing a directory tree

When navigating the Oracle Linux file system it is sometimes more comfortable as a human to see a directory tree structure in one view as opposed to a "flat" view. People using a graphical user interface under Linux or Windows are commonly used to seeing a tree view of the directories they are navigating. Having the tree view makes sense and is easier to read in some cases. By default Oracle Linux does not provide a feature for this on the command line, because the basic installation is kept minimal for all the good reasons. However the tree command is available and can be installed using yum and the standard Oracle Linux YUM repository.

Installation can be done with;
yum install tree

As soon as tree is installed you can use this command to show a tree representation of the file system. For example, if you use ls on a specific directory you would see something like the example below;

[root@localhost input]# ls
by-id  by-path  event0  event1  event2  event3  event4  event5  event6  mice  mouse0  mouse1
[root@localhost input]#

If we execute the tree command in the same location we will see a deeper tree representation of the same directory and all underlying directories, which makes it much quicker to understand the layout of the directory structure and navigate it. In the example below you see the representation made available via the tree command.

[root@localhost input]# tree
.
├── by-id
│   ├── usb-VirtualBox_USB_Tablet-event-joystick -> ../event5
│   └── usb-VirtualBox_USB_Tablet-joystick -> ../mouse1
├── by-path
│   ├── pci-0000:00:06.0-usb-0:1:1.0-event-joystick -> ../event5
│   ├── pci-0000:00:06.0-usb-0:1:1.0-joystick -> ../mouse1
│   ├── platform-i8042-serio-0-event-kbd -> ../event2
│   ├── platform-i8042-serio-1-event-mouse -> ../event3
│   ├── platform-i8042-serio-1-mouse -> ../mouse0
│   └── platform-pcspkr-event-spkr -> ../event6
├── event0
├── event1
├── event2
├── event3
├── event4
├── event5
├── event6
├── mice
├── mouse0
└── mouse1

As you can see, using tree makes it much faster to understand the layout of the directory structure, as opposed to using for example ls and diving manually into the different directories while using Oracle Linux.
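
In case the directory you are inspecting is very large, tree also has options to limit the output, for example -d to show only directories and -L to limit the depth of the tree that is displayed. A small example of how this could be used:

tree -d -L 2 /etc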

Saturday, December 10, 2016

Oracle Linux – finding the executable of a process

One of the important things when using and administering an Oracle Linux instance (or any other distribution for that matter) is to understand what is going on within your system. One of the things to understand is what is running on the Linux instance. Even more important is to ensure that you are constantly aware of, and in full control of, what is running, so you can detect anomalies such as things that are running but should not be. However, before you can look into detecting anomalies in running processes you need an understanding of how to look at what is running on your system.

Commonly, when people want to know what is running on an Oracle Linux system they use the top command or the ps command. The ps command reports a snapshot of the current processes on your system. An example of the ps command is shown below, taken from one of my temporary test servers;

[root@localhost 2853]# ps -ef | grep root
root         1     0  0 Nov26 ?        00:00:01 /sbin/init
root         2     0  0 Nov26 ?        00:00:00 [kthreadd]
root         3     2  0 Nov26 ?        00:00:02 [ksoftirqd/0]
root         5     2  0 Nov26 ?        00:00:00 [kworker/0:0H]
root         6     2  0 Nov26 ?        00:00:00 [kworker/u:0]
root         7     2  0 Nov26 ?        00:00:00 [kworker/u:0H]
root         8     2  0 Nov26 ?        00:00:00 [migration/0]
root         9     2  0 Nov26 ?        00:00:00 [rcu_bh]
root        10     2  0 Nov26 ?        00:00:35 [rcu_sched]
root        11     2  0 Nov26 ?        00:00:05 [watchdog/0]
root        12     2  0 Nov26 ?        00:00:00 [cpuset]
root        13     2  0 Nov26 ?        00:00:00 [khelper]
root        14     2  0 Nov26 ?        00:00:00 [kdevtmpfs]
root        15     2  0 Nov26 ?        00:00:00 [netns]
root        16     2  0 Nov26 ?        00:00:00 [bdi-default]
root        17     2  0 Nov26 ?        00:00:00 [kintegrityd]
root        18     2  0 Nov26 ?        00:00:00 [crypto]
root        19     2  0 Nov26 ?        00:00:00 [kblockd]
root        20     2  0 Nov26 ?        00:00:00 [ata_sff]
root        21     2  0 Nov26 ?        00:00:00 [khubd]
root        22     2  0 Nov26 ?        00:00:00 [md]
root        24     2  0 Nov26 ?        00:00:00 [khungtaskd]
root        25     2  0 Nov26 ?        00:00:05 [kswapd0]
root        26     2  0 Nov26 ?        00:00:00 [ksmd]
root        27     2  0 Nov26 ?        00:00:00 [fsnotify_mark]
root        38     2  0 Nov26 ?        00:00:00 [kthrotld]
root        39     2  0 Nov26 ?        00:00:00 [kworker/u:1]
root        40     2  0 Nov26 ?        00:00:00 [kpsmoused]
root        41     2  0 Nov26 ?        00:00:00 [deferwq]
root       187     2  0 Nov26 ?        00:00:00 [scsi_eh_0]
root       190     2  0 Nov26 ?        00:00:00 [scsi_eh_1]
root       252     2  0 Nov26 ?        00:00:06 [kworker/0:1H]
root       305     2  0 Nov26 ?        00:00:00 [kdmflush]
root       307     2  0 Nov26 ?        00:00:00 [kdmflush]
root       373     2  0 Nov26 ?        00:00:46 [jbd2/dm-0-8]
root       374     2  0 Nov26 ?        00:00:00 [ext4-dio-unwrit]
root       474     1  0 Nov26 ?        00:00:00 /sbin/udevd -d
root      1845     2  0 Nov26 ?        00:00:00 [jbd2/sda1-8]
root      1846     2  0 Nov26 ?        00:00:00 [ext4-dio-unwrit]
root      1893     2  0 Nov26 ?        00:00:00 [kauditd]
root      2100     2  0 Nov26 ?        00:01:08 [flush-252:0]
root      2282     1  0 Nov26 ?        00:00:00 auditd
root      2316     1  0 Nov26 ?        00:00:00 /sbin/rsyslogd -i /var/run/syslogd.pid -c 5
root      2445     1  0 Nov26 ?        00:00:00 cupsd -C /etc/cups/cupsd.conf
root      2477     1  0 Nov26 ?        00:00:00 /usr/sbin/acpid
root      2490  2489  0 Nov26 ?        00:00:00 hald-runner
root      2556     1  0 Nov26 ?        00:00:08 automount --pid-file /var/run/autofs.pid
root      2664     1  0 Nov26 ?        00:00:00 /usr/sbin/mcelog --daemon
root      2686     1  0 Nov26 ?        00:00:00 /usr/sbin/sshd
root      2812     1  0 Nov26 ?        00:00:02 /usr/libexec/postfix/master
root      2841     1  0 Nov26 ?        00:00:00 /usr/sbin/abrtd
root      2853     1  0 Nov26 ?        00:00:04 crond
root      2868     1  0 Nov26 ?        00:00:00 /usr/sbin/atd
root      2935     1  0 Nov26 ?        00:00:00 /usr/sbin/certmonger -S -p /var/run/certmonger.pid
root      2981     1  0 Nov26 tty1     00:00:00 /sbin/mingetty /dev/tty1
root      2983     1  0 Nov26 tty2     00:00:00 /sbin/mingetty /dev/tty2
root      2985     1  0 Nov26 tty3     00:00:00 /sbin/mingetty /dev/tty3
root      2987     1  0 Nov26 tty4     00:00:00 /sbin/mingetty /dev/tty4
root      2989     1  0 Nov26 tty5     00:00:00 /sbin/mingetty /dev/tty5
root      2996     1  0 Nov26 tty6     00:00:00 /sbin/mingetty /dev/tty6
root      2999   474  0 Nov26 ?        00:00:00 /sbin/udevd -d
root      3000   474  0 Nov26 ?        00:00:00 /sbin/udevd -d
root      5615  2686  0 Nov30 ?        00:00:07 sshd: root@pts/0
root      5620  5615  0 Nov30 pts/0    00:00:01 -bash
root      9739  5620  0 09:59 pts/0    00:00:00 ps -ef
root      9740  5620  0 09:59 pts/0    00:00:00 grep root
root     16808     2  0 Nov28 ?        00:00:00 [kworker/0:0]
root     17683     1  0 Nov30 ?        00:00:06 /usr/sbin/httpd
root     19810     2  0 Nov28 ?        00:04:08 [kworker/0:2]
root     20820     1  0 Nov28 ?        00:16:47 /usr/bin/consul agent -config-dir=/etc/consul.d
root     21102     1  0 Nov28 ?        00:05:02 /usr/bin/vault server -config=/etc/vault.d
[root@localhost 2853]#

As you can see this provides quite a good insight into what is running and what is not. However, it does not show you all the details you might want to see. For example, we see that some of the lines show the exact path of the executable that is running under the process, for example mingetty (minimal getty for consoles). We can zoom in to mingetty with a grep as shown below;

[root@localhost 2489]# ps -ef |grep mingetty
root      2981     1  0 Nov26 tty1     00:00:00 /sbin/mingetty /dev/tty1
root      2983     1  0 Nov26 tty2     00:00:00 /sbin/mingetty /dev/tty2
root      2985     1  0 Nov26 tty3     00:00:00 /sbin/mingetty /dev/tty3
root      2987     1  0 Nov26 tty4     00:00:00 /sbin/mingetty /dev/tty4
root      2989     1  0 Nov26 tty5     00:00:00 /sbin/mingetty /dev/tty5
root      2996     1  0 Nov26 tty6     00:00:00 /sbin/mingetty /dev/tty6
root      9815  5620  0 10:04 pts/0    00:00:00 grep mingetty
[root@localhost 2489]#

If we look at the above we can be fairly sure that the executable for mingetty is located at /sbin/mingetty. However, if we start looking at other lines this is not always that clear. Take as an example the HAL daemon hald (which is just a good example in this case). hald is a daemon that maintains a database of the devices connected to the system in real-time. The daemon connects to the D-Bus system message bus to provide an API that applications can use to discover, monitor and invoke operations on devices.

[root@localhost 2489]# ps -ef|grep hald
68        2489     1  0 Nov26 ?        00:00:18 hald
root      2490  2489  0 Nov26 ?        00:00:00 hald-runner
68        2532  2490  0 Nov26 ?        00:00:00 hald-addon-acpi: listening on acpid socket /var/run/acpid.socket
root      9864  5620  0 10:09 pts/0    00:00:00 grep hald
[root@localhost 2489]#

If we look closely at the above we can learn a number of things. For one, hald-addon-acpi is a child process of hald-runner and hald-runner is a child process of hald. We can also see that both hald and hald-addon-acpi are running under UID 68, which is the default UID for hald. However, what we are not able to see is what the actual executable is that is running behind hald.

To find out the exact executable of hald we can go to the /proc directory and then to the subdirectory that corresponds with the PID of the process. In our case this is /proc/2489, which is the directory that holds all the information about process 2489, our hald process. In this directory we will find a lot of interesting information;

[root@localhost /]# cd /proc/2489
[root@localhost 2489]# ls
attr        coredump_filter  fdinfo    mem         numa_maps      root       stat
auxv        cpuset           io        mountinfo   oom_adj        sched      statm
cgroup      cwd              latency   mounts      oom_score      schedstat  status
clear_refs  environ          limits    mountstats  oom_score_adj  sessionid  syscall
cmdline     exe              loginuid  net         pagemap        smaps      task
comm        fd               maps      ns          personality    stack      wchan
[root@localhost 2489]#

Even though all the files and directories within a process /proc/pid directory are interesting, our goal was to find out what the actual running executable behind pid 2489 from UID 68 was. To find out we have to look at exe, which is a symbolic link. We can do an ls -la command, or in case we want this to be part of a bash script we can use the readlink command.

The simple ls command will be able to tell us in a human readable manner what the executable is for this pid.

[root@localhost 2489]# ls -la exe
lrwxrwxrwx. 1 root root 0 Dec  3 10:03 exe -> /usr/sbin/hald
[root@localhost 2489]#

Even though this is great and we have just been able to find out what the executable file of a pid is in case it is not listed in the output of ps, we might want to include this in a bash script. The easiest way is using the readlink command, which will provide the below;

[root@localhost 2489]# readlink exe
/usr/sbin/hald
[root@localhost 2489]#
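
Building on this, the below is a minimal sketch of how readlink could be used in a bash script to list the executable behind every numeric PID in /proc; kernel threads have no executable and are simply skipped.

#!/bin/bash
# list the executable path for every running process, based on /proc/<pid>/exe
for piddir in /proc/[0-9]*
do
    pid=${piddir#/proc/}
    exe=$(readlink "$piddir/exe" 2>/dev/null)
    # kernel threads have no executable behind them, skip those
    [ -n "$exe" ] && echo "$pid : $exe"
done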

Making sure you understand a bit more about how to drill into the information of what is running on your system will help you debug issues quicker and allows you to implement stricter security and monitoring rules on your Oracle Linux systems.

Thursday, December 08, 2016

Oracle Fabric Interconnect - connecting FC storage appliances in HA

The Oracle Fabric Interconnect is a hardware network appliance which enables you to simplify the way servers are connected to the network within your datacenter. The general idea behind the Oracle Fabric Interconnect is that you connect all your servers to one (or two for high availability) Oracle Fabric Interconnect appliances using InfiniBand and virtualize all NICs and HBAs you might require on your servers. The effect is that the number of cables you need per server is reduced enormously and that every change to your network infrastructure is suddenly a software change. This enables you to make use of software defined networking instead of pulling physical cables to your servers.

Oracle Fabric Interconnect employs virtualization to enable you to flexibly connect servers to networks and storage. It eliminates the physical storage and networking cards found in every server and replaces them with virtual network interface cards (vNICs) and virtual host bus adapters (vHBAs) that can be deployed on the fly. Applications and operating systems see these virtual resources exactly as they would see their physical counterparts. The result is an architecture that is much easier to manage, far more cost-effective, and fully open.

As stated, the Oracle Fabric Interconnect is a solution for networking, however it can also provide solutions to connect to Fibre Channel storage appliances. If you look at the below image you see how you can expand the number of modules within an Oracle Fabric Interconnect appliance.

This allows you to add multiple types of modules. Oracle provides a number of modules which can be used to expand the standard Oracle Fabric Interconnect appliance.

One of the expansion kits enables you to connect your Oracle Fabric Interconnect appliance to a Fibre Channel based storage appliance, for example an EMC storage appliance. A commonly seen implementation scenario for this is the Oracle Private Cloud Appliance, where customers use the "onboard" ZFS storage for storing machine images but want to consume additional storage from an existing EMC storage solution they already have deployed within their IT footprint.

When adding the required I/O modules to the Oracle Fabric Interconnect appliance you can make virtual HBA's available to your servers. The virtual HBA's actually talk over the InfiniBand fabric to the Oracle Fabric Interconnect appliance I/O modules that connect to the EMC storage appliance. However, you do not need to add physical HBA's to all servers as they are already connected to the Oracle Fabric Interconnect appliance.

To make this highly available you will have to ensure you make the connections to your EMC storage appliance correctly. A number of best practices are available. The best practice for the Oracle Private Cloud Appliance is shown below.


Even though the above outlines the best practice to deploy an FC connection from an Oracle Private Cloud Appliance, in effect this is also the best practice for connecting EMC via FC to your own deployed Oracle Fabric Interconnect appliance solution, which you might use to connect all kinds of servers.

Friday, December 02, 2016

Oracle Linux - installing Consul as server

Consul, developed by HashiCorp, is a solution for service discovery and configuration. Consul is completely distributed, highly available, and scales to thousands of nodes and services across multiple datacenters. Some concrete problems Consul solves: finding the services applications need (database, queue, mail server, etc.), configuring services with key/value information such as enabling maintenance mode for a web application, and health checking services so that unhealthy services aren't used. These are just a handful of the important problems Consul addresses.

Consul solves the problem of service discovery and configuration. Built on top of a foundation of rigorous academic research, Consul keeps your data safe and works with the largest of infrastructures. Consul embraces modern practices, is friendly to existing DevOps tooling and is already deployed in very large infrastructures across multiple datacenters and running in production.

Installing Consul on Oracle Linux is relatively easy. You can download Consul from the consul.io website and unpack it. After this you already have a working Consul deployment; in essence it does not require an installation to be able to function. However, to ensure you can use Consul in a production system and that it starts as a service, you will have to do some more things.

First, make sure your consul binary is in a good location where it is accessible for everyone. For example you can decide to move it to /usr/bin where it is widely accessible throughout the system.

Next we have to make sure we can start it relatively easily. You can start Consul with all configuration as command line options, however you can also put all configuration in a JSON file, which makes a lot more sense. The below example is the content of the file /etc/consul.d/consul.json which I created on my test server to make Consul work with a configuration file. The data_dir specified is not the best location to store persistent data, so you might want to select a different data_dir location.

{
  "datacenter": "private_dc",
  "data_dir": "/tmp/consul3",
  "log_level": "INFO",
  "node_name": "consul_0",
  "server": true,
  "bind_addr": "127.0.0.1",
  "bootstrap_expect": 1
}
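
With this configuration in place you can, as a quick test, start the agent by hand and point it at the configuration directory; this is essentially the same command the init script below will run as a service.

consul agent -config-dir=/etc/consul.d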

Now we have ensured the configuration is located in /etc/consul.d/consul.json, we would like to ensure that the Consul server is started as a service every time the machine boots. I used the below code as the init script in /etc/init.d

#!/bin/sh
#
# consul - this script manages the consul agent
#
# chkconfig:   345 95 05
# processname: consul

### BEGIN INIT INFO
# Provides:       consul
# Required-Start: $local_fs $network
# Required-Stop:  $local_fs $network
# Default-Start: 3 4 5
# Default-Stop:  0 1 2 6
# Short-Description: Manage the consul agent
### END INIT INFO

# Source function library.
. /etc/rc.d/init.d/functions

# Source networking configuration.
. /etc/sysconfig/network

# Check that networking is up.
[ "$NETWORKING" = "no" ] && exit 0

exec="/usr/bin/consul"
prog=${exec##*/}

lockfile="/var/lock/subsys/$prog"
pidfile="/var/run/${prog}.pid"
logfile="/var/log/${prog}.log"
sysconfig="/etc/sysconfig/$prog"
confdir="/etc/${prog}.d"

[ -f $sysconfig ] && . $sysconfig

export GOMAXPROCS=${GOMAXPROCS:-2}

start() {
    [ -x $exec ] || exit 5
    [ -d $confdir ] || exit 6

    echo -n $"Starting $prog: "
    touch $logfile $pidfile
    daemon "{ $exec agent $OPTIONS -config-dir=$confdir &>> $logfile & }; echo \$! >| $pidfile"

    RETVAL=$?
    [ $RETVAL -eq 0 ] && touch $lockfile
    echo
    return $RETVAL
}

stop() {
    echo -n $"Stopping $prog: "
    killproc -p $pidfile $exec -INT 2>> $logfile
    RETVAL=$?
    [ $RETVAL -eq 0 ] && rm -f $pidfile $lockfile
    echo
    return $RETVAL
}

restart() {
    stop
    while :
    do
        ss -pl | fgrep "((\"$prog\"," > /dev/null
        [ $? -ne 0 ] && break
        sleep 0.1
    done
    start
}

reload() {
    echo -n $"Reloading $prog: "
    killproc -p $pidfile $exec -HUP
    echo
}

force_reload() {
    restart
}

configtest() {
    $exec configtest -config-dir=$confdir
}

rh_status() {
    status $prog
}

rh_status_q() {
    rh_status >/dev/null 2>&1
}

case "$1" in
    start)
        rh_status_q && exit 0
        $1
        ;;
    stop)
        rh_status_q || exit 0
        $1
        ;;
    restart)
        $1
        ;;
    reload|force-reload)
        rh_status_q || exit 7
        $1
        ;;
    status)
        rh_status
        ;;
    condrestart|try-restart)
        rh_status_q || exit 7
        restart
        ;;
    configtest)
        $1
        ;;
    *)
        echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload|configtest}"
        exit 2
esac

exit $?

As soon as you have the above code in the /etc/init.d/consul file and have made this file executable, you can use chkconfig to add it as a system service; it will then ensure Consul is stopped and started in the right way whenever you stop or start your server. This makes your Consul server a lot more resilient and you do not have to undertake any manual actions when you restart your Oracle Linux machine.
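
As a sketch of those last steps, the commands below make the script executable, register it with chkconfig and start the service on Oracle Linux 6.

chmod +x /etc/init.d/consul
chkconfig --add consul
chkconfig consul on
service consul start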

You can find the latest version of the script and the configuration file in my GitHub repository. This is tested on Oracle Linux 6; it will most likely also work on other Linux distributions, however that has not been tested.

Thursday, December 01, 2016

Oracle Linux – short tip #2 – reuse a command from history

Whenever using Linux from a command line you will be typing a large number of commands throughout the day in your terminal. Every now and then you want to review what command you used to achieve something, or you might even want to reuse a command from the history. As your terminal will only show a limited number of lines and you cannot scroll backwards endlessly you can make use of history. The history command will show you a long list of commands you executed.

As an example of the history command you can see the output of one of my machines:

[root@localhost ~]# history
    1  ./filebeat.sh -configtest -e
    2  service filebeat.sh status
    3  service filebeat status
    4  chkconfig --list
    5  chkconfig --list | grep file
    6  chkconfig --add filebeat
    7  service filebeat status
    8  service filebeat start
    9  cd /var/log/
   10  ls
   11  cat messages
   12  cd /etc/filebeat/
   13  ls
   14  vi filebeat.yml
   15  service filebeat stop
   16  service filebeat start
   17  date
   18  tail -20 /var/log/messages
   19  date
   20  tail -f /var/log/messages
   21  clear

Having the option to travel back in time and review which commands you used is great, especially if you are trying to figure something out and have tried a command a number of times in different ways and you are no longer sure what some of the previous “versions” of your attempt were.

An additional trick you can do with history is to reuse a command by simply calling it back from history without the need to type it again. For example, in the output above you can see that line 17 is date. If we want to reuse it we can simply type !17 on the command line, which executes command 17 again.

[root@localhost ~]# !17
date
Sun Nov 27 13:41:55 CET 2016
[root@localhost ~]#
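
A few related shortcuts can be handy as well; a small sketch, using bash's standard history expansion:

history | grep tail     # search your full history for every tail command you executed
!!                      # repeat the most recent command
!-2                     # repeat the command you executed two commands ago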

Oracle Linux – short tip #1 – using apropos

It happens to everyone, especially on Monday mornings: you suddenly cannot remember a command which normally is at the top of your head and which you have used a thousand times. The way to find the command you are looking for on Linux is by making use of the apropos command. apropos searches a set of database files containing short descriptions of system commands for keywords and displays the result on standard output.

As an example, I want to do something with a service but I am not sure which command to use or where to start looking. We can use apropos to get a first hint, as shown below:

[root@localhost ~]# apropos "system service"
chkconfig            (8)  - updates and queries runlevel information for system services
[root@localhost ~]#

As another example, I want to do something with utmp and I want to know which commands provide functionality to work with utmp. I can use the below apropos command to find out.

[root@localhost ~]# apropos utmp
dump-utmp            (8)  - print a utmp file in human-readable format
endutent [getutent]  (3)  - access utmp file entries
getutent             (3)  - access utmp file entries
getutid [getutent]   (3)  - access utmp file entries
getutline [getutent] (3)  - access utmp file entries
getutmp              (3)  - copy utmp structure to utmpx, and vice versa
getutmpx [getutmp]   (3)  - copy utmp structure to utmpx, and vice versa
login                (3)  - write utmp and wtmp entries
logout [login]       (3)  - write utmp and wtmp entries
pututline [getutent] (3)  - access utmp file entries
setutent [getutent]  (3)  - access utmp file entries
utmp                 (5)  - login records
utmpname [getutent]  (3)  - access utmp file entries
utmpx.h [utmpx]      (0p)  - user accounting database definitions
wtmp [utmp]          (5)  - login records
[root@localhost ~]#

It is not a perfect solution and you have to be a bit creative in guessing which keywords the man page descriptions might use, however in general it is a good starting point when looking for a command on Linux.
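
One thing to be aware of is that apropos relies on the whatis database being present. On a freshly installed machine this database may not have been generated yet, in which case apropos returns nothing useful. Assuming the standard man tooling on Oracle Linux 6, it can be (re)built as sketched below:

makewhatis
apropos "system service"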

Oracle - Profitability and Cost Management Cloud Service

Often it is hard to find the true costs of a product and to make a correct calculation of the profitability of a product. As an example, a retail organization might consider the sales price minus the purchase price to be the profitability of a single product. The hidden overall costs for overhead, transportation, IT services and others are often deducted from the overall company revenue. Even though this will give you the correct overall company revenue, it does not give you the correct profitability figures per product or service.

Not being able to see on a product or service level what the exact profitability is might result in having a sub-optimal set of products. Being able to see on a product or service level which products and services are profitable and which are not can help companies to create a clean portfolio and become more profitable overall.


With the launch of the profitability and cost management cloud service Oracle tries to provide a solution for this.

It takes information streams like production cost figures, facility cost figures and human resource cost figures and combines those with the information from your core general ledger. The combined set of figures is loaded into the performance ledger to enable Profitability and Cost Management Cloud to analyze and calculate the true costs of a product.

The core of the product is a web-based interface which allows analysts to combine, include and exclude sets of data to create a calculation model which will enable them to see the true costs, including the hidden costs.

As Profitability and Cost Management Cloud makes use of Hyperion in the background, you are also able to use the Smart View for Office options and include the data and the model results you create in Profitability and Cost Management Cloud in your local Excel. As a lot of people still like to do some additional analysis in Microsoft Excel, the inclusion of the Smart View for Office options makes a lot of sense to business users.

In the above video you can see an introduction to the new cloud service from Oracle. 

Sunday, November 27, 2016

Oracle Linux - Consul failed to sync remote state: No cluster leader

Whenever you are installing and running Consul from HashiCorp on Oracle Linux you might run into some strange errors. Even though your configuration JSON file passes the configuration validation, the log file contains a long repetitive list of the same errors complaining about "failed to sync remote state: No cluster leader" and "coordinate update error: No cluster leader".

Consul is a tool for service discovery and configuration. It provides high-level features such as service discovery, health checking and key/value storage. It makes use of a group of strongly consistent servers to manage the datacenter. Consul is developed by HashiCorp and is available from its own website.

It might be that you have the below output when you start consul:

    2016/11/25 21:03:50 [INFO] raft: Initial configuration (index=0): []
    2016/11/25 21:03:50 [INFO] raft: Node at 127.0.0.1:8300 [Follower] entering Follower state (Leader: "")
    2016/11/25 21:03:50 [INFO] serf: EventMemberJoin: consul_1 127.0.0.1
    2016/11/25 21:03:50 [INFO] serf: EventMemberJoin: consul_1.private_dc 127.0.0.1
    2016/11/25 21:03:50 [INFO] consul: Adding LAN server consul_1 (Addr: tcp/127.0.0.1:8300) (DC: private_dc)
    2016/11/25 21:03:50 [INFO] consul: Adding WAN server consul_1.private_dc (Addr: tcp/127.0.0.1:8300) (DC: private_dc)
    2016/11/25 21:03:55 [WARN] raft: no known peers, aborting election
    2016/11/25 21:03:57 [ERR] agent: failed to sync remote state: No cluster leader
    2016/11/25 21:04:14 [ERR] agent: coordinate update error: No cluster leader
    2016/11/25 21:04:30 [ERR] agent: failed to sync remote state: No cluster leader
    2016/11/25 21:04:50 [ERR] agent: coordinate update error: No cluster leader
    2016/11/25 21:05:01 [ERR] agent: failed to sync remote state: No cluster leader
    2016/11/25 21:05:26 [ERR] agent: coordinate update error: No cluster leader
    2016/11/25 21:05:34 [ERR] agent: failed to sync remote state: No cluster leader
    2016/11/25 21:06:02 [ERR] agent: coordinate update error: No cluster leader
    2016/11/25 21:06:10 [ERR] agent: failed to sync remote state: No cluster leader
    2016/11/25 21:06:35 [ERR] agent: coordinate update error: No cluster leader

The main reason for the above is that you try to start Consul in an environment where there is no cluster available, or where it is the first node of the cluster. In case you start it as the first, or only, node of the cluster you have to ensure that you include -bootstrap-expect 1 as a command line option when starting it.

You can also include "bootstrap_expect": 1 in the JSON configuration file if you use a configuration file to start Consul, as sketched below.
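
A minimal sketch of such a configuration file, stored in for example /etc/consul.d/consul.json; the node name and datacenter match the log output above, the data directory matches the command line example below, and all of them should be replaced with your own values:

{
  "server": true,
  "bootstrap_expect": 1,
  "datacenter": "private_dc",
  "node_name": "consul_1",
  "data_dir": "/tmp/consul",
  "log_level": "INFO"
}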

As an example, the below command line start of Consul will prevent the above errors:

consul agent -server -bootstrap-expect 1 -data-dir /tmp/consul

Friday, November 25, 2016

Oracle Linux - build Elasticsearch network.host configuration

With the latest version of Elasticsearch the directives used to ensure your Elasticsearch daemon is listening on the correct interfaces of your Linux machine have changed. By default Elasticsearch will only listen on your local loopback interface, which is not very useful in most cases.

Whenever you deploy Elasticsearch manually it is not a problem to configure this by hand, however, we are moving more and more to a world where deployments are done fully automatically. In case you use fully automated deployment and depend on bash scripting to do some of the tasks for you, the below scripts will come in handy.

In my case I used the below scripts to automatically configure Elasticsearch on Oracle Linux 6 instances to listen on all available interfaces, to ensure that Elasticsearch is directly usable by external servers and users.

To ensure your Elasticsearch daemon is listening on all interfaces you will have to ensure the below line is present, at least in my case, as I have two external interfaces and one local loopback interface in my instance.

network.host: _eth0_,_eth1_,_local_

When you are sure your machine will always have two external network interfaces and one local loopback interface you want Elasticsearch to listen on, you could hardcode this line. However, if you want a more generic and robust solution, you should read the interface names and build the configuration line dynamically.

The ifconfig command will give you the interfaces in a human-readable format which is not very usable in a programmatic manner. However, ifconfig does provide the required output, which means we can use it in combination with sed to get a list of the interface names only. The below example shows this:

[root@localhost tmp]# ifconfig -a |sed 's/[ \t].*//;/^\(lo\|\)$/d'
eth0
eth1
[root@localhost tmp]#

However, this is not yet in the format we need, so we create a small script to get it closer to the format we want. The below code example can be used for this:

#!/bin/bash

  for OUTPUT in $(ifconfig -a |sed 's/[ \t].*//;/^\(lo\|\)$/d')
  do
   echo "_"$OUTPUT"_"
  done

If we execute this we will have the following result:

[root@localhost tmp]# ./test.sh
_eth0_
_eth1_
[root@localhost tmp]#

As you can see this looks more like the input we want for the Elasticsearch configuration file, however we are not fully done. First of all the _local_ entry is missing and we still have a multi-line representation. The below code example shows the full script you can use to build the configuration line. We have added the _local_ entry and we use awk to turn it into a single comma-separated line.

#!/bin/bash
 {
  for OUTPUT in $(ifconfig -a |sed 's/[ \t].*//;/^\(lo\|\)$/d')
  do
   echo "_"$OUTPUT"_"
  done
echo "_local_"
 } | awk -vORS=, '{ print $1 }' | sed 's/,$/\n/'

If we run the above code we will get the below result:

[root@localhost tmp]# ./test.sh
_eth0_,_eth1_,_local_
[root@localhost tmp]#

You can use this in a wider script to ensure the line is written (including the network.host: prefix) to the /etc/elasticsearch/elasticsearch.yml file, which is the main configuration file used by Elasticsearch; a sketch of such a wrapper is shown below. As stated, I used this script and tested it while deploying Elasticsearch on Oracle Linux 6. It is expected to work on other Linux distributions, however this has not been tested.
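
The sketch below simply builds the value and appends it to the default configuration file location; adjust the path if your installation differs:

#!/bin/bash
# Build the network.host value out of the available interfaces plus _local_
# and append it to the Elasticsearch configuration file.
HOSTS=$( {
  for OUTPUT in $(ifconfig -a | sed 's/[ \t].*//;/^\(lo\|\)$/d')
  do
    echo "_${OUTPUT}_"
  done
  echo "_local_"
 } | awk -vORS=, '{ print $1 }' | sed 's/,$/\n/' )

echo "network.host: ${HOSTS}" >> /etc/elasticsearch/elasticsearch.yml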

Monday, November 21, 2016

Using Oracle cloud to integrate Salesforce and Amazon hosted SAP

Oracle Integration Cloud Service (ICS) delivers “hybrid” integration. Oracle Integration Cloud Service is a simple and powerful integration platform in the cloud to maximize the value of your investments in SaaS and on-premises applications. It includes an intuitive web-based integration designer for point-and-click integration between applications and a rich monitoring dashboard that provides real-time insight into the transactions, all running on Oracle Public Cloud. Oracle Integration Cloud Service will help accelerate integration projects and significantly shorten the time-to-market through its intuitive and simplified designer, an intelligent data mapper, and a library of adapters to connect to various applications.

Oracle Integration Cloud Service can also be leveraged during a transition from on-premises to cloud or when building a multi-cloud strategy. As an example, Oracle provides a standardized connection between Salesforce and SAP, as shown above.


An outline of how to achieve this integration is shown in the below video, which covers the options and the ease of developing an integration between Salesforce and SAP and of ensuring the two solutions work as an integrated, hybrid solution.



As enterprises move more and more to a full cloud strategy, having a central integration point that is already positioned in the cloud is ideal. As an example, SAP can run on Amazon. During a test and migration path to the cloud you most likely want to be able to test the integration between Salesforce and SAP without the need to re-code and re-develop the integration.



By using Oracle Integration Cloud Service as your central integration solution, the move to a cloud strategy for your non-cloud-native applications becomes much easier. You can add a second integration during your test and migration phase, and when your migration to, for example, Amazon has been completed you can discontinue the integration to your old on-premises SAP instances.


This will finally result in an all-cloud deployment where you have certain business functions running in Salesforce and your SAP systems running in Amazon, while you leverage Oracle Integration Cloud Service to bind all systems together and make it a true hybrid multi-cloud solution.

Monday, November 14, 2016

Oracle Cloud API - authenticate user cookie

Whenever interacting with the APIs of the Oracle Compute Cloud Service, the first thing that needs to be done is to authenticate yourself against the API. This is done by providing the required authentication details; in return you will receive a cookie. This cookie is used for the subsequent API calls you make until the cookie lifetime expires.

The Oracle documentation shows an example as shown below:

curl -i -X POST -H "Content-Type: application/oracle-compute-v3+json" -d "@requestbody.json" https://api-z999.compute.us0.oraclecloud.com/authenticate/

A couple of things to keep in mind when looking at this example: it will execute the curl command against the REST API endpoint URL for the US0 cloud datacenter and it expects a file named requestbody.json to be available containing the “payload” data.

In the example the payload file has the following content:
{
 "password": "acme2passwrd123",
 "user": "/Compute-acme/jack.jones@example.com"
}

The thing to keep in mind when constructing your own payload JSON is that the Compute- prefix in Compute-acme has to remain in place. Meaning, if your identity domain is “someiddomain” the user value should contain “Compute-someiddomain” and not just “someiddomain”, as sketched below.
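
So, for the hypothetical identity domain someiddomain, the payload would look as follows (reusing the example user from above):

{
 "password": "yourpassword",
 "user": "/Compute-someiddomain/jack.jones@example.com"
}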

When executing the curl command you will receive a response like the one shown below:

HTTP/1.1 204 No Content
Date: Tue, 12 Apr 2016 15:34:52 GMT
Server: nginx
Content-Type: text/plain; charset=UTF-8
X-Oracle-Compute-Call-Id: 16041248df2d44217683a6a67f76a517a59df3
Expires: Tue, 12 Apr 2016 15:34:52 GMT
Cache-Control: no-cache
Vary: Accept
Content-Length: 0
Set-Cookie: nimbula=eyJpZGVudGl0eSI6ICJ7XC...fSJ9; Path=/; Max-Age=1800
Content-Language: en

The part that you need, the actual cookie data which has to be used in later API calls, is only a subpart of the response received. The part you need is:

nimbula=eyJpZGVudGl0eSI6ICJ7XC...fSJ9; Path=/; Max-Age=1800

The rest of the response is not directly needed.
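
As a sketch of how the cookie can then be used, the example below captures the Set-Cookie value from the authenticate call and passes it as a Cookie header in a follow-up request. The endpoint is the same US0 example endpoint as above, and the second URL is only a placeholder for whichever API resource you actually want to call:

COOKIE=$(curl -si -X POST \
  -H "Content-Type: application/oracle-compute-v3+json" \
  -d "@requestbody.json" \
  https://api-z999.compute.us0.oraclecloud.com/authenticate/ \
  | grep "^Set-Cookie: " | sed 's/^Set-Cookie: //' | tr -d '\r')

curl -i -X GET \
  -H "Cookie: $COOKIE" \
  -H "Accept: application/oracle-compute-v3+json" \
  https://api-z999.compute.us0.oraclecloud.com/instance/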