Thursday, May 28, 2015

Exalogic based Big Data Strategy

I recently published a whitepaper on an Exalogic based Big Data Strategy which goes primarily into how you can capture data from, for example, sensors. Most big data strategies focus on how to handle the data as soon as it lands inside your Hadoop cluster. There is, however, also a need for a clear strategy on how you capture the data before you can use it.

Not having a clear strategy for capturing the data to be used within a wider big data strategy can kill a big data project. This paper goes into how you can use Oracle Exalogic in this process to ensure that you have a flexible and well performing solution for data acquisition.

You can find the article at the Capgemini website or you can read it below.

Oracle building blocks for future enterprise services

As we observe the direction enterprises are heading with regard to their IT footprint, we can see a number of interesting trends. None of them are new; however, we see them picking up more and more momentum and becoming the new standard within enterprise IT. If we look at the directions enterprises are moving in, and at the demands coming from internal users in the form of business departments, the challenges become clear.

The questions asked by the business in some cases go against the traditional way of working and doing things. To implement them and satisfy the business, radical change is needed in some cases; not only in the way IT departments work, but also in the way the entire IT landscape is architected and how it traditionally is built.

To move away from this traditional way of working, in most cases a combination of application and infrastructure modernization and rationalization is needed.

To read the full blogpost please visit the Capgemini.com Oracle blog.

Tuesday, April 14, 2015

Calculate DOM0 memory size

When using Oracle VM Server as a virtualization platform you will have to ensure that Dom0 has enough memory allocated to it. Dom0 is the initial domain started by the Xen hypervisor on boot. Dom0 is an abbreviation of "Domain 0" (sometimes written as "domain zero" or the "host domain"). Dom0 is a privileged domain that starts first and manages the unprivileged DomU domains.

To ensure the correct amount of memory is allocated to Dom0, Oracle recommends the following formula:

dom0_mem = 502 + int(physical_mem * 0.0205)

As an example, this means that a server with 2 GB of physical memory needs 543 MB of memory allocated to Dom0, and a server with 32 GB of physical memory needs 1173 MB of memory allocated to Dom0.
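
As a quick check you can script this calculation. Below is a minimal bash sketch, assuming the physical memory is passed in MB; the integer arithmetic mirrors the int() truncation in the formula above.

#!/bin/bash
# Compute the recommended Dom0 memory allocation for Oracle VM Server:
# dom0_mem = 502 + int(physical_mem * 0.0205), with memory sizes in MB.
physical_mem_mb=${1:?usage: $0 <physical memory in MB>}

# Multiply by 205 and divide by 10000 to emulate int(x * 0.0205)
# using integer arithmetic only.
dom0_mem=$(( 502 + physical_mem_mb * 205 / 10000 ))

echo "Recommended dom0_mem: ${dom0_mem}M"

Running this with 2048 and 32768 reproduces the 543 MB and 1173 MB values mentioned above.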

To change the Dom0 memory allocation, edit the /boot/grub/grub.conf file on the Oracle VM Server and change the dom0_mem parameter. For example, to change the memory allocation to 1024 MB, edit the file to read:

kernel /xen.gz console=com1,vga com1=38400,8n1 dom0_mem=1024M

Saturday, February 28, 2015

Using XenStore in Oracle VM

When you are developing a private cloud based upon the Oracle portfolio you will most likely make use of Oracle VM for your non-SPARC deployments. It is good to know that Oracle VM is based upon the open-source Xen hypervisor developed by the Xen Project.

The Xen Project hypervisor is an open-source type-1 or baremetal hypervisor, which makes it possible to run many instances of an operating system or indeed different operating systems in parallel on a single machine (or host). The Xen Project hypervisor is the only type-1 hypervisor that is available as open source. It is used as the basis for a number of different commercial and open source applications, such as: server virtualization, Infrastructure as a Service (IaaS), desktop virtualization, security applications, embedded and hardware appliances. The Xen Project hypervisor is powering the largest clouds in production today.

The above statement from the Xen Project notes that it is powering some of the largest clouds in production today, which is a good thing to know if you are using Oracle VM. It means that the code adopted within Oracle VM is also powering numerous other, and very large, clouds.

To make full use of the Xen foundation of Oracle VM it is advisable to start understanding how Xen itself works. The reason for this is that Oracle has not adopted or documented all Xen features to their full extent, while they are still largely available for you to use.

Even though Oracle has provided a great implementation of the Xen hypervisor, and the provided tooling, especially the integration with Oracle Enterprise Manager, makes life easy, there are moments you want to do more than the standard implementation allows. One of my recent experiences where this was the case was in relation to communicating with the XenStore. In essence the XenStore is a shared storage space between the different domains running on the hypervisor. The XenStore is maintained by the xenstored daemon in Dom0, and the operating systems running on the DomUs can communicate with it via the XenBus.

Even though the default way of communicating with the XenStore is via a number of commands that do not require you to know the exact location of the XenStore data, the data is actually located in a file. On Dom0 you can find the XenStore data file in /var/lib/xenstored, where it is named tdb. The name TDB stands for Tree Database.

One of the things the XenStore enables you to do is retrieve information from it inside a guest. When you are building more advanced deployment models you can, for example, store information in the XenStore, read it when a guest boots for the first time, and use that input in the configuration process. Also, having access to the XenStore from your guest VM can help you build better reports from the guest point of view. Gregory Guillou wrote a great blogpost on this subject; the reason he was interested in using the XenStore was to be able to find, from a VM guest point of view, the relation between a disk presented to a VM and the underlying storage. This could help him write additional code to do a snapshot from storage, for which he needed the information about the underlying storage.
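
As an illustration of the first-boot idea, the sketch below passes a value from Dom0 to a guest via the data subtree, which is conventionally writable and readable by the guest. The domain ID, key name and value are purely hypothetical; adjust them to your own environment.

# On Dom0: publish a configuration value for the guest running as domain 12
xenstore-write /local/domain/12/data/first-boot-role "webserver"

# Inside the guest (once the XenStore tools are installed, see below),
# relative paths resolve under the guest's own domain tree:
xenstore-read data/first-boot-role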

Compiling the XenStore tools
Oracle does not provide an RPM for the XenStore tools, however you can download the Oracle VM source code, which contains the source of the XenStore tools that you can then compile yourself. Or, when you need to do this often, you can create an RPM yourself so you can easily distribute the XenStore tools to your guest VMs.

Once you have downloaded the Oracle VM source code from the Oracle download site you should open the .iso file and locate the correct source RPM in the SRPMS directory. For the version I am currently running this is xen-4.1.3-25.el5.94.src.rpm, however the naming can differ per version. Upload the .rpm file to the guest VM and install it.

[root@test1 ~]# rpm -ivh xen-4.1.3-25.el5.94.src.rpm
warning: xen-4.1.3-25.el5.94.src.rpm: Header V3 DSA/SHA1 Signature, key ID 1e5e0159: NOKEY
   1:xen                    ########################################### [100%]
[root@test1 ~]#

This should have created a new directory under your account named rpmbuild/SOURCES, in our case /root/rpmbuild/SOURCES, which contains a large set of files. The only file we are interested in is the one that contains the source code used to compile the XenStore tools. In our case this is xen-4.1.3-ovs.tar.gz.

[root@test1 SOURCES]# cd /root/rpmbuild/SOURCES/
[root@test1 SOURCES]#
[root@test1 SOURCES]# tar -zxvf xen-4.1.3-ovs.tar.gz

The above extracts the sources we need (and others), after which we can go into the location of the XenStore tools source code and make it. However, before you can run make you have to ensure some prerequisites are in place. The below yum install command will install all required prerequisites if they are not installed yet. It has been tested on Oracle Linux 6 and has been extended with gettext and patch based upon the information in the blog written by Gregory Guillou on this subject.

yum install oracle-rdbms-server-11gR2-preinstall libuuid-devel openssl-devel ncurses-devel dev86 iasl python-devel SDL-devel gettext patch

As soon as you have ensured that the prerequisites are all in place you can start the make command in the right directory.

[root@test1 tools]# cd /root/rpmbuild/SOURCES/xen-4.1.3-ovs/tools
[root@test1 tools]# make 

When this completes without any errors or warnings you can install the XenStore tools onto your guest VM.

[root@test1 tools]# cd /root/rpmbuild/SOURCES/xen-4.1.3-ovs/tools/misc
[root@test1 misc]#
[root@test1 misc]# install xen-detect /usr/local/bin
[root@test1 misc]# cd /root/rpmbuild/SOURCES/xen-4.1.3-ovs/tools/xenstore
[root@test1 xenstore]#
[root@test1 xenstore]# install libxenstore.so.3.0 /usr/local/lib
[root@test1 xenstore]# install xenstore xenstore-control /usr/local/bin
[root@test1 xenstore]#
[root@test1 xenstore]# cd /usr/local/bin
[root@test1 bin]#
[root@test1 bin]# ln -f xenstore xenstore-chmod
[root@test1 bin]# ln -f xenstore xenstore-exists
[root@test1 bin]# ln -f xenstore xenstore-list
[root@test1 bin]# ln -f xenstore xenstore-ls
[root@test1 bin]# ln -f xenstore xenstore-read
[root@test1 bin]# ln -f xenstore xenstore-rm
[root@test1 bin]# ln -f xenstore xenstore-write
[root@test1 bin]#

In essence this should have compiled and installed the XenStore tooling into your guest VM. As you can see, there is a reason why you would want to create your own RPM in case you need to install this on more than one machine. However, the above gives you a good insight and a starting point to build your own RPM.

Using the XenStore tools
Before you can use the XenStore tools you will have to set the correct path for LD_LIBRARY_PATH and you will have to mount the Xen file system.

[root@test1 local]# export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH
[root@test1 local]# mount -t xenfs none /proc/xen
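
If you want these settings to survive a reboot you can make them persistent. The lines below are a sketch assuming the standard /etc/fstab and /etc/profile.d mechanisms.

# Mount the Xen file system automatically at boot
echo "none /proc/xen xenfs defaults 0 0" >> /etc/fstab

# Set the library path for all login shells
echo 'export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH' > /etc/profile.d/xenstore.sh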

Once you have done this you can use the XenStore tools. In essence you have xenstore-chmod, xenstore-exists, xenstore-list, xenstore-ls, xenstore-read, xenstore-rm and xenstore-write at your disposal.
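
A few illustrative commands, assuming the default XenStore layout; within a guest, relative paths resolve under that domain's own tree.

# Show everything this guest is allowed to see in the XenStore
xenstore-ls

# Read this guest's name
xenstore-read name

# List the virtual block devices presented to this guest
xenstore-ls device/vbd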

Disable SELinux on Oracle Linux

Security-Enhanced Linux (SELinux) is a Linux kernel security module that provides a mechanism for supporting access control security policies, including United States Department of Defense–style mandatory access controls (MAC). In some cases running SELinux is a good idea and it should certainly be included in server builds for certain customers. However, in some cases it can also be a big and troublesome security feature you do not want to be active.

One of the cases where you do not want SELinux active is a development or research box. I use the term research box here for the machine you might have for yourself to play with and try new things. Like most people working with Oracle Linux a lot, I have a large set of research boxes in a virtualized manner as well as development machines, all used for a specific task or project; when the project is done the installation is removed.

When you use a machine this way you most likely do not want to have SELinux in place. Disabling SELinux is a small task.

SELinux is enabled via the configuration file /etc/selinux/config; to disable it, simply ensure SELINUX=disabled is set in this file. This turns off SELinux completely for your Oracle Linux installation after a reboot.
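
A minimal sketch of the change, including a way to verify the current state with sestatus:

# Check the current SELinux status
sestatus

# Set SELINUX=disabled in the configuration file
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config

# A reboot is required for the change to take full effect
reboot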

As stated, SELinux is not a bad choice in secure environments; it can simply be a hindrance when you are testing new functionality. Understanding SELinux and using it in the right way can be a learning curve; to understand the benefits of SELinux it is good to watch the video below, which provides an introduction.

Tuesday, January 06, 2015

Compile Google Protocol Buffers On Oracle Linux

Google has released a lot of code as open source software, free to download and free to use under different open source licenses. One of the software packages released is protobuf, Google's data interchange format. Protocol buffers are Google's language-neutral, platform-neutral, extensible mechanism for serializing structured data – think XML, but smaller, faster, and simpler. You define how you want your data to be structured once, then you can use specially generated source code to easily write and read your structured data to and from a variety of data streams, using a variety of languages – Java, C++, or Python.

When trying to compile RethinkDB on a Linux system you will notice that protobuf is a prerequisite and needs to be available. In general this is not an issue, as for most Linux distributions you can download installation packages for it. However, for Oracle Linux no package is available, which leaves you with the source, which you need to compile yourself before you can use it.

As compiling is something generally not done by standard users, and as protobuf is something used primarily by developers, do not expect everything to be as simple as you might think. Some code hacking might be required to get the Google code working on your Oracle Linux machine.

During the first attempt I encountered a bug stating:
configure.ac:57: error: possibly undefined macro: AM_PROG_AR

After some hacking in the code it became clear that line 57 in configure.ac can be commented out when you encounter this issue. For some reason the macro is called and checked, however it is not used in the rest of the code. After commenting out the line you can make and install the code without any issue.
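
For reference, autoconf comments use the dnl keyword, so commenting out the line can be done with a one-liner like the hypothetical example below; the line number may differ per protobuf version.

# Prepend the autoconf comment marker dnl to line 57 of configure.ac
sed -i '57s/^/dnl /' configure.ac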

Steps needed to get Protobuf working on Oracle Linux are the following:

1) Get the source code from GitHub.
$ git clone https://github.com/google/protobuf

2.a) Run autogen to automatically generate the config script
$ ./autogen.sh

2.b) If you run into the AM_PROG_AR issue, open the configure.ac file and comment out the associated line (see the sed example above)

3) Run configure to prepare for make
$ ./configure

4) Make the source code
$ make

5) Check the make results
$ make check

6) Install Protobuf
$ make install

By now you should have a complete installation of Google Protobuf. You will now be able to use it, in my case to compile the source from the RethinkDB project to create a RethinkDB instance on my Oracle Linux server.
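
As a quick sanity check, assuming the default /usr/local install prefix, you can verify the compiler and, if protoc complains that it cannot find libprotobuf, refresh the linker cache:

$ protoc --version
$ sudo ldconfig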

Monday, January 05, 2015

Compile node.js on Oracle Linux

On a frequent basis new development techniques are developed; some get adopted and some do not. New programming languages are introduced; some become popular and some die in the first or second stage of their existence. Recently Node.js has been introduced and has seen rapid acceptance, especially with back-end developers for mobile applications and web-based platforms who have the need for strong API based solutions.

Node.js is an open source, cross-platform runtime environment for server-side and networking applications. Node.js applications are written in JavaScript, and can be run within the Node.js runtime on OS X, Microsoft Windows, Linux and FreeBSD.

Node.js provides an event-driven architecture and a non-blocking I/O API that optimizes an application's throughput and scalability. These technologies are commonly used for real-time web applications.

Installing Node.js can be done via a package manager for a number of Linux distributions. Packages have been provided and installation is a painless task. For other distributions the installation of Node.js is a bit more complex, as there is no mainstream installation package available. For Oracle Linux there is no mainstream Node.js RPM available; this means that you will need to compile Node.js from source to make it available on your Oracle Linux server.

Ensuring you can use Node.js on your Oracle Linux server takes the following steps:

1) Download the source code
wget http://nodejs.org/dist/v0.10.35/node-v0.10.35.tar.gz

2) Extract the downloaded source code
tar zxf node-v0.10.35.tar.gz

3) Descend into the new directory
cd node-v0.10.35

4) Configure before compiling
./configure

5) Make and install Node.js
make
sudo make install

This should give you a freshly compiled version of Node.js on Oracle Linux. You can check whether your compilation succeeded by executing node with the --version option.
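
For example, for the source version downloaded above:

node --version
v0.10.35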

Sunday, December 28, 2014

Oracle Enterprise Metadata Management for Big Data

When talking about big data the term data lake is often used; the term was originally introduced by James Dixon, Pentaho CTO. It refers to gathering all available data so it can be used in a big data strategy. James Dixon had a point with this term: collecting all data can be part of your big data strategy. However, there is a need to ensure your data lake does not turn into a data swamp. Gartner gives some warnings on the data lake approach in the "Gartner Says Beware of the Data Lake Fallacy" post on the Gartner website.

In my recent post on the Capgemini website I go into detail on Oracle Enterprise Metadata Management and other Oracle products that can be used in combination with the Capgemini Big Data approach to ensure enterprises get the best possible benefits from implementing a Big Data strategy.


Capgemini promotes a flow consisting of Acquisition, Marshalling, Analysis and Action steps, all supported by a Master Data Management & Data Governance solution.

Saturday, December 27, 2014

Using Oracle WebLogic for building Enterprise Portal functions

Modern enterprises more and more demand that their IT organizations take the role of a service provider, providing services with a minimum of lead time and on a pay-per-use model. Enterprise business users, and even the IT organizations themselves, have a growing desire to request services, in the widest sense of the word, directly and in a self-service manner. Modern cloud solutions, and especially hybrid cloud solutions, potentially provide the answer to this question.

Building a portal to help enterprises answer this question can be done using many techniques and many architectures. In a recent blogpost at the Capgemini website I launched the first step towards creating an open blueprint for an enterprise portal. The solution is not open source, however the architecture is open and free for everyone to use. The solution is based upon Oracle WebLogic and other Oracle components. The intention is to create a set of posts over time to expand on this subject.


The full article, named "The need for Enterprise Self Service Portals", can be found at the Oracle blog at capgemini.com, among other articles I wrote on this website.

Friday, November 07, 2014

Enabling parallel DML on Oracle Exadata

When using Oracle Exadata you can make use of parallelism on a number of levels and within a number of processes. However, when doing a massive data import the level of parallelism might be a bit disappointing at first. The reason for this is that by default not all parallel options are activated. When you do a data import you want parallel DML (Data Manipulation Language) to be enabled.

You can check the current setting of parallel DML by querying V$SESSION for PDML_STATUS and PDML_ENABLED, as in the example query below:

SELECT pq_status, pdml_status, pddl_status, pdml_enabled FROM v$session WHERE sid = SYS_CONTEXT('userenv','sid');

This will give you an overview of the current settings applied to your session. If you find that PDML_STATUS = DISABLED and PDML_ENABLED = NO, you can change this by executing an ALTER SESSION as shown below:

ALTER SESSION ENABLE PARALLEL DML;


When you rerun the above query you should now see that PDML_STATUS = ENABLED and PDML_ENABLED = YES. Now that you have set these flags correctly you can provide hints to your statements to ensure you make optimal use of parallelism. Do note that enabling parallel DML alone does not solve all your issues; you will still have to look at the code you use during the process of loading the data into the Exadata.
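
As an illustration, the session below sketches a hypothetical direct-path parallel load; the table names, credentials and the degree of parallelism are assumptions, combining the statement-level PARALLEL hint with APPEND.

sqlplus user/password <<'EOF'
-- Enable parallel DML for this session before loading
ALTER SESSION ENABLE PARALLEL DML;

-- Direct-path, parallel insert from a staging table (names are illustrative)
INSERT /*+ APPEND PARALLEL(8) */ INTO sales_fact
SELECT * FROM sales_staging;

COMMIT;
EOF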

Wednesday, November 05, 2014

Configure Exadata storage cell mail notification

The Oracle Exadata database machine gets a lot of its performance not specifically from the compute nodes or the InfiniBand switches; one of the main game changers is the storage cells used within the Exadata. The primary way, for command line people, to interact with the storage cells is the "cell command line interface", commonly called CellCLI.

If you want to ensure your storage cell informs you via mail notifications, you can configure or alter the email notification configuration of the storage cell using CellCLI commands.

When making changes to the configuration you first want to know the current configuration of a storage cell. You can execute the following command to request the current configuration:

CellCLI> list cell detail

For mail configuration you will primarily want to look at:

  • smtpServer
  • smtpFromAddr
  • smtpFrom
  • smtpToAddr
  • notificationMethod
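
Instead of scanning the full detail output you can also request just these attributes; the sketch below runs CellCLI non-interactively from the shell on the storage cell.

# List only the mail-related attributes of the storage cell
cellcli -e "LIST CELL ATTRIBUTES smtpServer, smtpFromAddr, smtpFrom, smtpToAddr, notificationMethod"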

To test the current configuration you can have the storage cell send a test mail notification to see if it works as you expect. The VALIDATE MAIL operation sends a test message using the e-mail attributes configured for the cell. You can use the below CellCLI command for this:

CellCLI> ALTER CELL VALIDATE MAIL

If you want to change something you can use the "ALTER CELL" option of CellCLI. As an example, the below command sets all information for a full mail configuration to ensure that your Exadata storage cell will send you mail notifications. As you can see it will also send SNMP notifications, however the full configuration for SNMP is not shown in this example.

CellCLI> ALTER CELL smtpServer='smtp0.seczone.internal.com', -
                    smtpFromAddr='huntgroup@internal.com', -
                    smtpFrom='Exadata-Huntgroup', -
                    notificationMethod='mail,snmp'

Wednesday, October 22, 2014

Zero Data Loss Recovery Appliance

During Oracle OpenWorld 2014 Oracle released the Zero Data Loss Recovery Appliance as one of the new Oracle Engineered Systems. The Zero Data Loss Recovery Appliance is an Oracle Engineered System specifically designed to address backup and recovery related challenges in modern database deployments. It is specifically designed to ensure that a customer can always perform a point-in-time recovery in an always-on economy, where downtime results directly in loss of revenue and where the loss of data can potentially bankrupt the enterprise.

According to the Oracle documentation the key features of the Zero Data Loss Recovery Appliance are:
  • Real-time redo transport
  • Secure replication
  • Autonomous tape archival
  • End-to-end data validation
  • Incremental-forever backup strategy
  • Space-efficient virtual full backups
  • Backup operations offload
  • Database-level protection policies
  • Database-aware space management
  • Cloud-scale architecture
  • Unified management and control 
According to the same documentation the key benefits of the Zero Data Loss Recovery Appliance are:
  • Eliminate Data Loss
  • Minimal Impact Backups
  • Database Level Recoverability
  • Cloud-scale Data Protection
Even though the Zero Data Loss Recovery Appliance brings some nice features, and the key benefits and key features Oracle states in the documentation are very valid, the main point is not highlighted in the documentation. The mentioned points are in many enterprises already covered in the form of self-built solutions based upon products from a number of vendors. The backup software is in most cases Oracle RMAN or a combination of Oracle RMAN and a third-party software solution. The hardware commonly comes from different vendors: a vendor for the server hardware, a vendor for the storage and a vendor for the tape appliances.

One of the main benefits of introducing the Zero Data Loss Recovery Appliance is that it provides the perfect leverage to ensure that all backup and recovery strategies are standardized and optimized in an Oracle best practice manner. In most enterprise deployments you still see that backup and recovery strategies differ over a wide Oracle database deployment landscape.

It is not uncommon that backup and recovery strategies involve multiple teams, multiple tools and scripts, and that multiple ways of implementation are used over time. By not having an optimized and standardized solution for backup and recovery, organizations lack an enterprise-wide insight into how well the data is protected against data loss, and a uniform way of working for recovery is missing. This introduces the risk that data is lost due to missed backups or due to an inconsistent way of restoring.

The below diagram shows a dual-datacenter solution for the Zero Data Loss Recovery Appliance in which it is connected to an Oracle Exadata machine. However, all databases, regardless of the server platform they are deployed on, can be connected to the Zero Data Loss Recovery Appliance.


When operating a large enterprise-wide Oracle landscape, customers use Oracle Enterprise Manager for full end-to-end monitoring and management. One of the additional benefits of the Zero Data Loss Recovery Appliance is that it can be fully managed by Oracle Enterprise Manager. This means that the complete management of all components is done via Oracle Enterprise Manager, in contrast to home-grown solutions where customers are in some cases forced to use separate management tooling for all the different hardware and software components that make up the full backup and recovery solution.

For more information about the Zero Data Loss Recovery Appliance please also refer to the presentation shown below.



Wednesday, October 08, 2014

Oracle Enterprise Manager, Metering and Chargeback

When discussing IT with the business side of an enterprise, the general opinion is that IT departments should, among other things, be like a utility company: providing services to the business so they can accelerate in what they do, not dictating how to do business but providing the services needed, when, where and how the business needs them. This point of view is a valid one, and is fueled by the rise of cloud and the Business Driven IT Management paradigm.

The often forgotten, overlooked or deliberately ignored part of viewing your IT department as a utility company is that the consumer is charged based upon consumption. In many enterprises, large and small, the funding of the IT department and the funding of projects is based upon a percentage of the budget each department receives in the annual budget. The consolidated value of the IT share of all departments is the budget for the IT department. Even though this seems a reasonably fair way, it is in many cases an unfair distribution of IT costs.

When business departments consider the IT department a service organization in the way they consider utility companies service organizations, the natural evolutionary step is that the IT department will invoice the business departments based upon usage. By transforming the financial funding of the IT department from a budget income to a commercially gained income, a number of things will be accomplished:


  • Fair distribution of IT costs between business departments
  • Forcing IT departments to become more effective
  • Forcing IT departments to become more innovative
  • Forcing IT departments to become more financially aware


One of the foundations of this strategy is that the IT department must be able to track the usage of systems. As companies are moving to cloud based solutions, implementing systems in private and public clouds, this provides the ideal moment to move to a pay-per-use model.

When using Oracle Enterprise Manager as part of your cloud, as for example in the blog post "The future of the small cloud", you can also use the "metering and chargeback" options that are part of the cloud foundation of Oracle Enterprise Manager. Oracle Enterprise Manager allows you to meter the usage of assets that you define, at a price per time unit that you define. When deploying the metering and chargeback solution within Oracle Enterprise Manager, the implementation models to calculate the price per time unit for your internal departments are virtually endless.
The setup of metering and chargeback centers around defining charge plans and assigning those charge plans to specific internal customers and cost centers.


The setup of the full end-to-end solution will take time: time to set up the technical side of things, as shown in the example screenshot below. However, the majority of the time you will need to spend on identifying and calculating what the exact price for an item should be. This should include all the known and hidden costs IT departments have before a service is delivered to internal customers; for example housing, hosting, management, employees, training, licenses, and so on. This should all be calculated into the price per item per time unit. It is a pure financial calculation that needs to be done.


Even though metering and chargeback is part of the Oracle Enterprise Manager solution, in reality most companies use it as a metering and showback solution to inform internal departments about the costs. A next step for companies currently using metering and showback within Oracle Enterprise Manager is to really bill internal departments based upon consumption. This, however, is more an internal mind change than a technological implementation.

Implementing "metering and chargeback" is a solution that is needed more and more in the modern enterprise, not purely from a technical point of view but rather from a business model modernization point of view. By implementing Oracle Enterprise Manager as the central management and monitoring solution, and including the "metering and chargeback" options, modern enterprises get a huge benefit out of the box.

Manage all databases with one tool

When Oracle acquired Sun, a lot of active MySQL users wondered in which direction the development of MySQL would go. Oracle has been developing and expanding the functionality of the MySQL database continuously since the acquisition. The surprising part has been that the integration of MySQL with Oracle Enterprise Manager was never developed. Now, during Oracle OpenWorld 2014, Oracle has announced the launch of the Oracle Enterprise Manager plugin for the MySQL enterprise edition.

A non-official version of a MySQL plugin has already been around for some time; however, the launch of the official MySQL plugin is significant, not so much from a new technological point of view, but rather from an integration and management point of view.

The majority of enterprises that host Oracle databases also host Oracle MySQL databases in their IT infrastructure. The statement that MySQL is only used for small databases and small deployments is incorrect; as an example, Facebook runs tens of thousands of MySQL servers and a typical instance is 1 to 2 TB. As companies implement and use Oracle databases, and most likely Oracle middleware, they have the need for central management using Oracle Enterprise Manager 12c, possibly to improve the day-to-day operations and maintenance of the landscape, possibly to use a cloud based approach to IT management.

Prior to the launch of the Oracle Enterprise Manager MySQL plugin, companies were forced to use out-of-band management tooling for day-to-day operations.


With the introduction of the Oracle Enterprise Manager MySQL plugin you can now incorporate the management of your MySQL databases into Oracle Enterprise Manager. This provides you a single point of management and monitoring, resulting directly in a better managed IT landscape and a quick return on investment.


On a high level the new Oracle Enterprise Manager MySQL plugin provides the following features:

  • MySQL Performance Monitoring
  • MySQL Availability Monitoring
  • MySQL Metric Collection
  • MySQL Alerts and Notifications
  • MySQL Configuration Management
  • MySQL Reports
  • MySQL Remote Monitoring


In general the MySQL plugin for Oracle Enterprise Manager provides you the option to unify the monitoring and management of all your MySQL and Oracle databases in one tool, which will result in better management, improved service availability and stability, as well as a reduction in cost due to centralized tooling and the avoidance of unneeded downtime.

Sunday, September 28, 2014

The future of the small cloud

When talking about cloud, the thoughts of Amazon, Azure and Oracle Cloud immediately come to mind for a lot of people. When talking about private cloud, the general idea is that this is a model which is only valuable for large customers running hundreds or thousands of environments, and which requires a large investment in hardware, software, networking and human resources to deploy.

Even though the public cloud provides a lot of benefits and relieves companies from CAPEX costs, in some cases it is beneficial to create a private cloud. This is not only the case for large enterprises running thousands of services; it is also the case for small companies. Some of the reasons that a private cloud can be more applicable than a public cloud are, for example:

  • Legal requirements
  • Compliance rules and regulations
  • Confidentiality of data and/or source code
  • Specific needs around control beyond the possibilities of public cloud
  • Specific needs around performance beyond the possibilities of public cloud
  • Specific architectural and technical requirements beyond the possibilities of public cloud

There are more specific reasons why a private cloud, or hybrid cloud, can be more beneficial for small companies than a public cloud; these can be determined on a case-by-case basis. Capgemini provides roadmap architecture services to support customers in determining the best solution for a specific case, which can be public cloud, private cloud or a mix of both in the form of a hybrid cloud. This is next to more traditional solutions that are still very valid in many cases for customers.

One of the main misconceptions around private cloud is that it is considered only valid for large deployments and large enterprises. The general opinion is that there is the need for a high initial investment in hardware, software and knowledge. As stated, this is a misconception. By using both Oracle hardware and software there is an option to build a relatively low-cost private cloud which can be managed, for a large part, from a central graphical user interface in the form of Oracle Enterprise Manager.

A private cloud can be started with a simple deployment of two or more Sun X4-* servers, using Oracle VM as the hypervisor for virtualization. This can be the starting point for a simple self-service enabled private cloud where departments and developers can provision systems in an infrastructure-as-a-service manner, or provision databases and middleware in the same fashion.


By using the above setup in combination with Oracle Enterprise Manager you can have a simple private cloud up and running in a matter of days. This will enable your business or your local development teams to make use of a private in-house cloud where they can use self-service portals to deploy new virtual machines, databases or applications in a matter of minutes, based upon templates provided by Oracle.