Monday, August 31, 2015

Oracle Linux local firewalls -- firewalld

A question that comes to me quite often is whether local firewalls should be used on Linux. The question often comes from operating system administrators who do not “like” to maintain firewalls locally and would rather have the network team take care of this at the network level. It is also often posed by DBAs and developers who need frequent access to the systems and are involved in changes to them. Every time they need a port opened or a new route added between machines, they have to go through a change management process for the local firewalls, and they would rather see them not implemented.

As I work on Linux (and other operating systems) regularly in a changing role, I do sympathize with these statements and understand the questions and reasons behind them. From my role as a security consultant and architect, however, I do not agree that security should be managed solely by the network team and that local firewalls are nothing more than an annoyance.

A recent post on a mailing list around a different subject gave me the opportunity to come back to my favorite topic: defending the use of local firewalls.

I am particularly interested in confirming that low-risk servers can’t be used as a stepping stone to attack a high-risk server, or as a means of unauthorised data egress.

The above quote is taken out of context due to sharing restrictions; however, the full mail started a discussion on the topic of local firewalls. The quote alone already provides some clues as to why local firewalls are important.

Consider the architecture below for a “standard” implementation of an Oracle-based landscape. The landscape uses Oracle Linux and hosts Oracle software.


Most implementations are based on this principle: they have a DMZ which hosts the external-facing services, and those machines connect to the back end of the application, which in our case is an Oracle RAC database implementation in combination with an Oracle NoSQL key-value store.

In principle nothing is wrong with this picture. If all is done correctly, the external-facing firewall will only allow traffic on the ports that are needed and block everything else. The same applies to the firewall between the DMZ and the back-end systems. However, if an attacker is able to gain access to one of the hosts in the DMZ, the external firewall does not protect you against communication between the compromised host and the other hosts in that VLAN. The attacker can move between those hosts far more easily because there is no firewall between them.

The implementation below shows a more rigid model in which all hosts have a local firewall. If one of the hosts is compromised, the attacker is largely confined to that host and the options to connect to other hosts are extremely limited.

This gives the administrators and the security team much more time to fight back against the attack, and the overall security of the landscape is significantly raised.

Many people opt against implementing local firewall rules because of the management overhead they introduce. In my personal opinion, however, local firewalls should be the standard, with exceptions made not to implement them, rather than the other way around. Currently the default is to not implement them, and only in some exceptional cases do customers decide otherwise.

Now, if you implement a local firewall on Linux, the most common solution to look for is iptables. However, as of Oracle Linux 7 the default firewall is no longer iptables; it is firewalld instead. Firewalld uses the well-known netfilter framework, the packet filtering framework inside the Linux kernel since the 2.4.x series. The netfilter kernel component consists of a set of in-memory tables holding the rules that the kernel uses to control network packet filtering.

Oracle states the following about firewalld-based firewalls within the OL7 documentation:

The firewalld-based firewall has the following advantages over an iptables-based firewall:

  1. Unlike the iptables and ip6tables commands, using firewall-cmd does not restart the firewall and disrupt established TCP connections.
  2. firewalld supports dynamic zones, which allow you to implement different sets of firewall rules for systems such as laptops that can connect to networks with different levels of trust. You are unlikely to use this feature with server systems.
  3. firewalld supports D-Bus for better integration with services that depend on firewall configuration.


The point about zones is especially interesting if you need to maintain a large set of firewalls. You can define zones for the various situations in your enterprise footprint, ship them with all your installations, and activate each zone only on the systems where it is appropriate.
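As an illustration, creating a custom zone, binding a source network to it and opening the Oracle listener port could look something like the commands below (the zone name, network and port are just examples, not a prescription):

firewall-cmd --permanent --new-zone=dbtier
firewall-cmd --permanent --zone=dbtier --add-source=192.168.10.0/24
firewall-cmd --permanent --zone=dbtier --add-port=1521/tcp
firewall-cmd --reload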

To configure the settings of firewalld you can make use of a GUI by calling firewall-config, or you can use the CLI by calling firewall-cmd.
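As a starting point, the following commands inspect the current state of the firewall and the active configuration from the CLI (they are read-only, nothing is changed):

systemctl status firewalld
firewall-cmd --state
firewall-cmd --get-default-zone
firewall-cmd --list-all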


Monday, August 17, 2015

Joining a startup

Ever thought about working for a startup, but decided not to do it because you thought it was financially a gamble and a risk too big to take? You might have done the right thing: if you are not able to take a financial risk and you have a family to support, then joining a startup might not be the best choice for you. Especially considering that 90% of startups fail, and a lot of them fail before they make a profit or are even able to pay a reasonable amount to the people who work for them.

However, for those who do have the option to take a financial risk and/or have an entrepreneurial way of thinking, and who love the drive of people working at a startup, the upside can be big. Some startups become well-running businesses, and some, a very small part of the remaining 10%, become very big.

If you have joined a startup from the beginning and it becomes big, very big, the value per employee can skyrocket. And a large part of those companies do recognise the endless effort the employees have put in to make this happen.



As you can see in the above bubble chart, some companies have a very high value per employee. So yes, working for a startup can be a risk. However, if it becomes a success, it can become a very big success with a big financial gain. You should never join a startup with this as your only goal, though; join a startup for the fun and for the mindset that lives in such a company.

Tuesday, June 30, 2015

Oracle VirtualBox failed to open /dev/vboxnetctl - resolved

VirtualBox is the virtualization solution from Oracle for virtualizing operating systems on your workstation, used by many to run, for example, test and development servers locally. I use it on most of my workstations for this purpose. Recently I hit a new issue on my MacBook: when trying to set up an internal network I was presented with the following error:


Error while adding new interface: failed to open /dev/vboxnetctl: No such file or directory

For some reason VirtualBox had gone south. Most solutions presented online involve restarting Oracle VirtualBox to resolve the issue. Restarting VirtualBox under Mac OS can be done by executing sudo ./VirtualBox restart in the /Library/StartupItems/VirtualBox directory. For some reason, on my machine this was causing yet another issue.
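Spelled out, the restart sequence is:

cd /Library/StartupItems/VirtualBox
sudo ./VirtualBox restart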

The restart command presented me with the below message instead of a clean restart:

Unloading VBoxDrv.kext
(kernel) Can't remove kext org.virtualbox.kext.VBoxDrv; services failed to terminate - 0xe00002c7.
Failed to unload org.virtualbox.kext.VBoxDrv - (iokit/common) unsupported function.
-v Error: Failed to unload VBoxDrv.kext
-f VirtualBox

After checking, I found out that even though I had stopped VirtualBox, some background processes were still operational. Killing them and rerunning the restart command resolved all the issues, and VirtualBox was able to function again as can be expected. The restart command, after killing the background processes, presented me with the below:

Unloading VBoxDrv.kext
/Applications/VirtualBox.app/Contents/MacOS/VBoxAutostart => /Applications/VirtualBox.app/Contents/MacOS/VBoxAutostart-amd64
/Applications/VirtualBox.app/Contents/MacOS/VBoxBalloonCtrl => /Applications/VirtualBox.app/Contents/MacOS/VBoxBalloonCtrl-amd64
/Applications/VirtualBox.app/Contents/MacOS/VBoxDD2GC.gc => /Applications/VirtualBox.app/Contents/MacOS/VBoxDD2GC.gc-amd64
/Applications/VirtualBox.app/Contents/MacOS/VBoxDDGC.gc => /Applications/VirtualBox.app/Contents/MacOS/VBoxDDGC.gc-amd64
/Applications/VirtualBox.app/Contents/MacOS/VBoxExtPackHelperApp => /Applications/VirtualBox.app/Contents/MacOS/VBoxExtPackHelperApp-amd64
/Applications/VirtualBox.app/Contents/MacOS/VBoxHeadless => /Applications/VirtualBox.app/Contents/MacOS/VBoxHeadless-amd64
/Applications/VirtualBox.app/Contents/MacOS/VBoxManage => /Applications/VirtualBox.app/Contents/MacOS/VBoxManage-amd64
/Applications/VirtualBox.app/Contents/MacOS/VBoxNetAdpCtl => /Applications/VirtualBox.app/Contents/MacOS/VBoxNetAdpCtl-amd64
/Applications/VirtualBox.app/Contents/MacOS/VBoxNetDHCP => /Applications/VirtualBox.app/Contents/MacOS/VBoxNetDHCP-amd64
/Applications/VirtualBox.app/Contents/MacOS/VBoxSVC => /Applications/VirtualBox.app/Contents/MacOS/VBoxSVC-amd64
/Applications/VirtualBox.app/Contents/MacOS/VBoxXPCOMIPCD => /Applications/VirtualBox.app/Contents/MacOS/VBoxXPCOMIPCD-amd64
/Applications/VirtualBox.app/Contents/MacOS/VMMGC.gc => /Applications/VirtualBox.app/Contents/MacOS/VMMGC.gc-amd64
/Applications/VirtualBox.app/Contents/MacOS/VirtualBox => /Applications/VirtualBox.app/Contents/MacOS/VirtualBox-amd64
/Applications/VirtualBox.app/Contents/MacOS/VirtualBoxVM => /Applications/VirtualBox.app/Contents/MacOS/VirtualBoxVM-amd64
/Applications/VirtualBox.app/Contents/MacOS/vboxwebsrv => /Applications/VirtualBox.app/Contents/MacOS/vboxwebsrv-amd64
Loading VBoxDrv.kext
Loading VBoxUSB.kext
Loading VBoxNetFlt.kext
Loading VBoxNetAdp.kext
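For completeness: finding and stopping the stray background processes can be done along the following lines (the exact process names can differ per setup, so check the ps output first):

# list any VirtualBox processes that are still running
ps aux | grep -i vbox
# stop them; replace <pid> with the process id reported by ps
sudo kill <pid>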

Thursday, May 28, 2015

Exalogic based Big Data Strategy

I recently published a whitepaper on an Exalogic-based Big Data strategy, which goes primarily into how you can capture data from, for example, sensors. Most big data strategies cover how to handle the data once it lands inside your Hadoop cluster. However, there is also a need for a clear strategy on how you capture the data before you can use it.

Not having a clear strategy for capturing data to be used within a wider big data strategy can kill a big data project. This paper goes into how you can use Oracle Exalogic in this process to ensure you have a flexible and well-performing solution for data acquisition.

You can find the article at the Capgemini website or you can read it below.

Oracle building blocks for future enterprise services

As we observe the direction enterprises are heading with regard to their IT footprint, we can observe a number of interesting trends. None of them are new; however, we see them picking up more and more momentum and becoming the new standard within enterprise IT. If we take a look at the directions enterprises are moving in, and at the demands of internal users in the form of business departments, we see the challenges they face.

The questions asked by the business in some cases go against the traditional way of working and doing things. To implement them and satisfy the business, radical change is needed in some cases: not only in the way IT departments work, but also in the way the entire IT landscape is architected and how it traditionally is built.

To be able to move away from the traditional way of working, in most cases a combination of application and infrastructure modernization and rationalization is needed.

To read the full blogpost please visit the Capgemini.com Oracle blog.

Tuesday, April 14, 2015

Calculate DOM0 memory size

When using Oracle VM Server as a virtualization platform you have to ensure that Dom0 has enough memory allocated to it. Dom0 is the initial domain started by the Xen hypervisor on boot. Dom0 is an abbreviation of "Domain 0" (sometimes written as "domain zero" or called the "host domain"). It is a privileged domain that starts first and manages the unprivileged DomU domains.

To ensure the correct amount of memory is allocated to Dom0, Oracle recommends the algorithm below (with memory sizes in MB):

dom0_mem = 502 + int(physical_mem * 0.0205)

As an example, this means that a server with 2 GB of physical memory needs 543 MB of memory allocated to Dom0, and a server with 32 GB of physical memory needs 1173 MB.
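As a quick sketch, the same calculation can be done in shell, with the physical memory given in MB:

# compute the recommended dom0 memory for a server with 32 GB (32768 MB)
physical_mem=32768
dom0_mem=$((502 + physical_mem * 205 / 10000))   # integer math truncates, like int()
echo "dom0_mem = ${dom0_mem}M"                   # prints: dom0_mem = 1173M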

To change the Dom0 memory allocation, edit the /boot/grub/grub.conf file on the Oracle VM Server and change the dom0_mem parameter. For example, to change the memory allocation to 1024 MB, edit the kernel line to be:

kernel /xen.gz console=com1,vga com1=38400,8n1 dom0_mem=1024M

Saturday, February 28, 2015

Using XenStore in Oracle VM

When you are developing a private cloud based upon the Oracle portfolio, you will most likely make use of Oracle VM for your non-SPARC deployments. It is good to know that Oracle VM is based upon the open-source Xen hypervisor developed by the Xen Project.

The Xen Project hypervisor is an open-source type-1 or baremetal hypervisor, which makes it possible to run many instances of an operating system or indeed different operating systems in parallel on a single machine (or host). The Xen Project hypervisor is the only type-1 hypervisor that is available as open source. It is used as the basis for a number of different commercial and open source applications, such as: server virtualization, Infrastructure as a Service (IaaS), desktop virtualization, security applications, embedded and hardware appliances. The Xen Project hypervisor is powering the largest clouds in production today.

The above statement from the Xen Project makes the point that it is powering some of the largest clouds in production today, which is a good thing to know if you are using Oracle VM. It means that the code adopted within Oracle VM is also powering numerous other, and very large, clouds.

To be able to make full use of the Xen part that makes up Oracle VM, it is advisable to start understanding how Xen itself works. The reason for this is that Oracle has not adopted, or documented, all Xen features to their full extent, while they are still largely available for you to use.

Even though Oracle has provided a great implementation of the Xen hypervisor, and the tooling provided, especially the integration with Oracle Enterprise Manager, makes life easy, there are moments you want to do more than the standard implementation allows. One of my recent experiences where this was the case was in relation to communicating with the XenStore. In essence, the XenStore is a shared storage space between the different domains running on the hypervisor. The XenStore is maintained by the xenstored daemon in Dom0, and the operating systems running in the DomUs can communicate with it via the XenBus.

Even though the default way of communicating with the XenStore is via a number of commands that do not require you to know the exact location of the XenStore data, the data is actually located in a file. You can find the XenStore data file on Dom0 in /var/lib/xenstored, where it is named tdb. The name TDB stands for Tree Database.

One of the things the XenStore enables you to do is retrieve information from it within a guest. When you are building more advanced deployment models you can, for example, store information in the XenStore, read it when you boot a guest for the first time, and use this input in the configuration process. Also, having access to the XenStore from your guest VM can help you build better reports from the guest point of view. Gregory Guillou wrote a great blogpost on this subject; the reason he was interested in using the XenStore was to find the relation between a disk presented to a VM and the underlying storage, from a VM guest point of view. This helped him write additional code to take a snapshot from storage, for which he needed the information about the underlying storage.

Compiling the XenStore tools
Oracle does not provide an RPM for the XenStore tools; however, you are able to download the Oracle VM source code, which contains the source of the XenStore tools that you can then compile yourself. Or, if you need to do this often, you can create an RPM yourself so you can easily distribute the XenStore tools to your guest VMs.

Once you have downloaded the Oracle VM source code from the Oracle download site, open the .iso file and locate the correct source RPM in the SRPMS directory. For the version I am currently running this is xen-4.1.3-25.el5.94.src.rpm; however, naming can differ per version. Upload the .rpm file to the guest VM and install it.

[root@test1 ~]# rpm -ivh xen-4.1.3-25.el5.94.src.rpm
warning: xen-4.1.3-25.el5.94.src.rpm: Header V3 DSA/SHA1 Signature, key ID 1e5e0159: NOKEY
   1:xen                    ########################################### [100%]
[root@test1 ~]#

This should have provided you with a new directory under your home directory named rpmbuild/SOURCES, in our case /root/rpmbuild/SOURCES, which contains a large set of files. The only file we are interested in is the one that contains the source code used to compile the XenStore tools; in our case this is xen-4.1.3-ovs.tar.gz.

[root@test1 SOURCES]# cd /root/rpmbuild/SOURCES/
[root@test1 SOURCES]#
[root@test1 SOURCES]# tar -zxvf xen-4.1.3-ovs.tar.gz

The above extracts the sources we need (and others), after which we can go into the location where the source code for the XenStore tools is located and make the code. Before you can make it, however, you have to ensure some prerequisites are in place. The yum install command below installs all required prerequisites if they are not installed yet. It has been tested on Oracle Linux 6 and has been extended with gettext and patch, based upon the information from the blog written by Gregory Guillou on this subject.

yum install oracle-rdbms-server-11gR2-preinstall libuuid-devel openssl-devel ncurses-devel dev86 iasl python-devel SDL-devel gettext patch

As soon as you have ensured that the prerequisites are all in place you can start the make command in the right directory.

[root@test1 tools]# cd /root/rpmbuild/SOURCES/xen-4.1.3-ovs/tools
[root@test1 tools]# make 

When the make completes without any errors or warnings, you can install the XenStore tools onto your guest VM.

[root@test1 tools]# cd /root/rpmbuild/SOURCES/xen-4.1.3-ovs/tools/misc
[root@test1 tools]#
[root@test1 misc]# install xen-detect /usr/local/bin
[root@test1 xenstore]# cd /root/rpmbuild/SOURCES/xen-4.1.3-ovs/tools/xenstore
[root@test1 xenstore]#
[root@test1 xenstore]# install libxenstore.so.3.0 /usr/local/lib
[root@test1 xenstore]# install xenstore xenstore-control /usr/local/bin
[root@test1 xenstore]#
[root@test1 xenstore]# cd /usr/local/bin
[root@test1 bin]#
[root@test1 bin]# ln -f xenstore xenstore-chmod
[root@test1 bin]# ln -f xenstore xenstore-exists
[root@test1 bin]# ln -f xenstore xenstore-list
[root@test1 bin]# ln -f xenstore xenstore-ls
[root@test1 bin]# ln -f xenstore xenstore-read
[root@test1 bin]# ln -f xenstore xenstore-rm
[root@test1 bin]# ln -f xenstore xenstore-write
[root@test1 bin]#

In essence, this should have compiled and installed the XenStore tooling in your guest VM. As you can see, there is a reason why you may want to create your own RPM in case you need to install this on more than one machine. However, the above gives you a good insight and starting point to build your own RPM.

Using the XenStore tools
Before you can use the XenStore tools you have to set the correct path for LD_LIBRARY_PATH and mount the Xen file system.

[root@test1 local]# export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH
[root@test1 local]# mount -t xenfs none /proc/xen

Once you have done this you can use the XenStore tools. In essence you have xenstore-chmod, xenstore-exists, xenstore-list, xenstore-ls, xenstore-read, xenstore-rm and xenstore-write at your disposal.
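As a small example, you can read some basic information about the guest from within the guest itself (paths that do not start with a / are resolved relative to the domain's own XenStore directory; the exact keys available can differ per setup):

[root@test1 local]# xenstore-read name
[root@test1 local]# xenstore-read domid
[root@test1 local]# xenstore-ls device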

Oracle Linux disable selinux

Security-Enhanced Linux (SELinux) is a Linux kernel security module that provides a mechanism for supporting access control security policies, including United States Department of Defense-style mandatory access controls (MAC). In some cases running SELinux is a good idea, and it should certainly be included in server builds for certain customers. However, in some cases it can also be a very big and troublesome security feature that you do not want to be active.

One of the cases where you do not want to have SELinux active is a development or research box. In this case I use the term research box for a machine you might have for yourself to play with and try new things. Like most people working with Oracle Linux a lot, I have a large set of research boxes in a virtualized manner, as well as development machines, all used for a specific task or project; when the project is done, the installation is removed.

When you use a machine like this, you most likely do not want to have SELinux in place. Disabling SELinux is a small task.

SELinux is enabled via the configuration file /etc/selinux/config. To disable it, simply ensure SELINUX=disabled is set in this file and reboot. This should turn off SELinux completely for your Oracle Linux installation.
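For example, a minimal sketch (the sed line assumes the default layout of the config file):

# check the current mode
getenforce

# switch to permissive mode for the running system (not persistent)
setenforce 0

# set SELINUX=disabled permanently; a reboot is needed to fully disable it
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config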

As stated, SELinux is not a bad choice in secure environments; it can only be a hindrance when you are testing new functionality. Understanding SELinux and using it in the right way can be a learning curve. To understand the benefits of SELinux it is good to have a look at the video below, which provides an introduction.

Tuesday, January 06, 2015

Compile Google Protocol Buffers On Oracle Linux

Google has released a lot of code as open source software, free to download and free to use under different open source licenses. One of the software packages released is protobuf, Google's data interchange format. Protocol buffers are Google's language-neutral, platform-neutral, extensible mechanism for serializing structured data; think XML, but smaller, faster, and simpler. You define how you want your data to be structured once, then you can use specially generated source code to easily write and read your structured data to and from a variety of data streams, using a variety of languages: Java, C++, or Python.

When trying to compile RethinkDB on a Linux system you will notice that Protobuf is a prerequisite and needs to be available. In general this is not an issue, as for most Linux distributions you are able to download installation packages for it. However, for Oracle Linux no package is available, which leaves you with the source code that you need to compile yourself before you can use it.

As compiling is generally not done by standard users, and as Protobuf is primarily used by developers, do not expect everything to be as simple as you might think. Some code hacking might be required to get the Google code working on your Oracle Linux machine.

During the first attempt I encountered a bug stating:
configure.ac:57: error: possibly undefined macro: AM_PROG_AR

After some hacking in the code it became clear that line 57 in configure.ac can be commented out when you encounter this issue. For some reason the macro is called and checked, but not used in the rest of the code. After commenting out the line you can make and install the code without any issues.

Steps needed to get Protobuf working on Oracle Linux are the following:

1) Get the sourcecode from github.
$ git clone https://github.com/google/protobuf

2.a) Run autogen to automatically generate the configure script
$ ./autogen.sh

2.b) If you run into the AM_PROG_AR issue open the configure.ac file and comment out the associated line

3) Run configure to prepare for make
$ ./configure

4) Make the source code
$ make

5) Check the make results
$ make check

6) Install Protobuf
$ make install

By now you should have a complete installation of Google Protobuf. You will now be able to use it; in my case to compile the source from the RethinkDB project to create a RethinkDB instance on my Oracle Linux server.
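To verify the result you can ask protoc for its version. If the library is not found at runtime, refreshing the linker cache usually helps, as make install places the libraries in /usr/local/lib by default:

$ sudo ldconfig
$ protoc --version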

Monday, January 05, 2015

Compile node.js on Oracle Linux

On a frequent basis new development techniques are developed; some get adopted and some do not. New programming languages are introduced, and some become popular while others die in the first or second stage of their existence. Recently Node.js was introduced and has seen rapid acceptance, especially with back-end developers for mobile applications and web-based platforms who need strong API-based solutions.

Node.js is an open source, cross-platform runtime environment for server-side and networking applications. Node.js applications are written in JavaScript, and can be run within the Node.js runtime on OS X, Microsoft Windows, Linux and FreeBSD.

Node.js provides an event-driven architecture and a non-blocking I/O API that optimizes an application's throughput and scalability. These technologies are commonly used for real-time web applications.

Installing Node.js can be done via a package manager for a number of Linux distributions; packages have been provided and installation is a painless task. For other distributions the installation of Node.js is a bit more complex, as there is no mainstream installation package available. For Oracle Linux there is no mainstream Node.js RPM, which means that you will need to compile Node.js from source to make it available on your Oracle Linux server.

Making Node.js available on your Oracle Linux server takes the following steps:

1) Download the source code
wget http://nodejs.org/dist/v0.10.35/node-v0.10.35.tar.gz

2) Extract the downloaded source code
tar zxf node-v0.10.35.tar.gz

3) Descend into the new directory
cd node-v0.10.35

4) Configure before compiling
./configure

5) Make and install Node.js
sudo make install

This should give you a freshly compiled version of Node.js on Oracle Linux. You can check whether the compilation succeeded by executing node with the --version option.
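For example, with the version downloaded above the output should look like this:

node --version
v0.10.35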

Sunday, December 28, 2014

Oracle Enterprise Metadata Management for Big Data

When talking about big data the term data lake is often used; the term was originally introduced by James Dixon, Pentaho CTO. It refers to gathering all available data so it can be used in a big data strategy. Collecting all data can indeed be part of your big data strategy; however, there is a need to ensure your data lake does not turn into a data swamp. Gartner states some warnings on the data lake approach in the “Gartner Says Beware of the Data Lake Fallacy” post on the Gartner website.

In my recent post on the Capgemini website I go into detail on Oracle Enterprise Metadata Management and other Oracle products that can be used in combination with the Capgemini Big Data approach, to ensure enterprises get the best possible benefits from implementing a Big Data strategy.


Capgemini promotes a flow which includes Acquisition, Marshalling, Analysis and Action steps all supported by a Master Data Management & Data Governance solution.

Saturday, December 27, 2014

Using Oracle Weblogic for building Enterprise Portal functions

Modern enterprises increasingly demand that their IT organizations take the role of a service provider: provide services with a minimum of lead time and on a pay-per-use model. Enterprise business users, and even the IT organizations themselves, have a growing desire to be able to request services, in the widest sense of the word, directly in a self-service manner. Modern cloud solutions, and especially hybrid cloud solutions, potentially provide the answer to this question.

Building a portal to help enterprises answer this question can be done using many techniques and many architectures. In a recent blogpost on the Capgemini website I took the first step towards an open blueprint for an enterprise portal. The solution is not open source; however, the architecture is open and free for everyone to use. The solution is based upon Oracle WebLogic and other Oracle components. The intention is to create a series of posts over time to expand on this subject.


The full article, named "The need for Enterprise Self Service Portals", can be found on the Capgemini.com Oracle blog, among other articles I wrote for that website.

Friday, November 07, 2014

Enabling parallel DML on Oracle Exadata

When using Oracle Exadata you can make use of parallelism on a number of levels and within a number of processes. However, when doing a massive data import, the level of parallelism might be a bit disappointing at first. The reason for this is that by default not all parallel options are activated. When you do a data import you want parallel DML (Data Manipulation Language) to be enabled.

You can check the current setting of parallel DML by querying V$SESSION for PDML_STATUS and PDML_ENABLED, as in the example query below.

SELECT pq_status, pdml_status, pddl_status, pdml_enabled FROM v$session WHERE sid = SYS_CONTEXT('userenv','sid');

This will give you an overview of the current settings applied to your session. If you find that PDML_STATUS = DISABLED and PDML_ENABLED = NO, you can change this by executing an ALTER SESSION as shown below:

ALTER SESSION ENABLE PARALLEL DML;


When you rerun the above query you should now see that PDML_STATUS = ENABLED and PDML_ENABLED = YES. Now that you have set these flags correctly, you can provide hints to your statements to ensure you make optimal use of parallelism. Do note that enabling parallel DML alone does not solve all your issues; you will still have to look at the code you use during the load of the data into the Exadata.
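As an illustration, a parallel insert could then look something like the statement below (the table names and the degree of parallelism are just examples, tune them for your own load):

-- direct-path parallel insert with an explicit degree of parallelism of 8
INSERT /*+ APPEND PARALLEL(sales, 8) */ INTO sales
SELECT /*+ PARALLEL(sales_staging, 8) */ * FROM sales_staging;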

Wednesday, November 05, 2014

Configure Exadata storage cell mail notification

The Oracle Exadata Database Machine gets a lot of its performance not specifically from the compute nodes or the InfiniBand switches; one of the main game changers is the storage cells used within the Exadata. The primary way, for command line people, to interact with the storage cells is the “cell command line interface”, commonly called CellCLI.

If you want to ensure your storage cell informs you via mail notifications, you can configure or alter the email notification configuration of the storage cell using the CellCLI command.

When making changes to the configuration you first want to know the current configuration of a storage cell. You can execute the following command to request it:

CellCLI> list cell detail

For mail configuration you will primarily want to look at:

  • smtpServer
  • smtpFromAddr
  • smtpFrom
  • smtpToAddr
  • notificationMethod
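You can also query just these attributes directly, for example:

CellCLI> LIST CELL ATTRIBUTES smtpServer, smtpFromAddr, smtpFrom, smtpToAddr, notificationMethod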

To test the current configuration you can have the storage cell send a test mail notification, to see if it works as you expect. The VALIDATE MAIL operation sends a test message using the e-mail attributes configured for the cell. You can use the below CellCLI command for this:

CellCLI> ALTER CELL VALIDATE MAIL

If you want to change something, you can use the ALTER CELL option of CellCLI. As an example, the command below sets the information for a full mail configuration, ensuring your Exadata storage cell will send you mail notifications. As you can see it also enables SNMP notifications; however, the full SNMP configuration is not shown in this example.

CellCLI> ALTER CELL smtpServer='smtp0.seczone.internal.com', -
                    smtpFromAddr='huntgroup@internal.com', -
                    smtpFrom='Exadata-Huntgroup', -
                    notificationMethod='mail,snmp'

Wednesday, October 22, 2014

Zero Data Loss Recovery Appliance

During Oracle OpenWorld 2014 Oracle released the Zero Data Loss Recovery Appliance as one of the new Oracle Engineered Systems. The Zero Data Loss Recovery Appliance is an Oracle Engineered System specifically designed to address backup and recovery related challenges in modern database deployments. It is designed to ensure that a customer can always perform a point-in-time recovery in an always-on economy, where downtime results directly in loss of revenue and the loss of data can potentially bankrupt the enterprise.

According to the Oracle documentation the key features of the Zero Data Loss Recovery Appliance are:
  • Real-time redo transport
  • Secure replication
  • Autonomous tape archival
  • End-to-end data validation
  • Incremental-forever backup strategy
  • Space-efficient virtual full backups
  • Backup operations offload
  • Database-level protection policies
  • Database-aware space management
  • Cloud-scale architecture
  • Unified management and control 
According to the same documentation the key benefits of the Zero Data Loss Recovery Appliance are:
  • Eliminate Data Loss
  • Minimal Impact Backups
  • Database Level Recoverability
  • Cloud-scale Data Protection
Even though the Zero Data Loss Recovery Appliance brings some nice features, and the key benefits and key features Oracle states in the documentation are very valid, the main point is not broadcast in the documentation. The mentioned points are already available in many enterprises in the form of self-built solutions based upon products from a number of vendors. The backup software is in most cases Oracle RMAN, or a combination of Oracle RMAN and third-party software. The hardware commonly comes from different vendors: a vendor for the server hardware, a vendor for the storage and a vendor for the tape appliances.

One of the main benefits of introducing the Zero Data Loss Recovery Appliance is that it provides the perfect leverage to ensure that all backup and recovery strategies are standardized and optimized in an Oracle best-practice manner. In most enterprise deployments you still see that backup and recovery strategies differ across a wide Oracle database deployment landscape.

It is not uncommon for backup and recovery strategies to involve multiple teams, multiple tools and scripts, and multiple ways of implementation used over time. By not having an optimized and standardized solution for backup and recovery, organizations lack enterprise-wide insight into how well the data is protected against data loss, and a uniform way of working for recovery is missing. This introduces the risk that data is lost due to missed backups or due to an incompatible way of restoring.

The below diagram shows a dual-datacenter solution for the Zero Data Loss Recovery Appliance in which it is connected to an Oracle Exadata machine. However, all databases, regardless of the server platform they are deployed on, can be connected to the Zero Data Loss Recovery Appliance.


When operating a large enterprise-wide Oracle landscape, customers use Oracle Enterprise Manager for full end-to-end monitoring and management. One of the additional benefits of the Zero Data Loss Recovery Appliance is that it can be fully managed by Oracle Enterprise Manager. This means that the complete management of all components is done via Oracle Enterprise Manager, in contrast to home-grown solutions where customers are in some cases forced to use separate management tooling for all the different hardware and software components that make up the full backup and recovery solution.

For more information about the Zero Data Loss Recovery Appliance, please also refer to the presentation shown below.