Tuesday, January 06, 2015

Compile Google Protocol Buffers On Oracle Linux

Google has released a lot of code as open source software, free to download and free to use under various open source licenses. One of the packages released is protobuf, Google's data interchange format. Protocol Buffers are Google's language-neutral, platform-neutral, extensible mechanism for serializing structured data – think XML, but smaller, faster, and simpler. You define how you want your data to be structured once, then you can use specially generated source code to easily write and read your structured data to and from a variety of data streams, using a variety of languages – Java, C++, or Python.
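
As a minimal illustration of such a definition (a sketch only, using the proto2 syntax that protobuf 2.x expects; the message and field names are made up):

message Person {
  required string name = 1;
  optional int32  id   = 2;
}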

When trying to compile RethinkDB on a Linux system you will notice that Protobuf is a prerequisite and needs to be available. In general this is not an issue, as most Linux distributions provide installation packages for it. However, Oracle Linux does not, which leaves you with the source code, which you will need to compile yourself before you can use it.

As compiling is not something standard users generally do, and as Protobuf is primarily used by developers, do not expect everything to be as simple as you might think. Some code hacking might be required to get the Google code working on your Oracle Linux machine.

During the first attempt I encountered an error stating:
configure.ac:57: error: possibly undefined macro: AM_PROG_AR

After some digging in the code it became clear that line 57 in configure.ac can be commented out when you encounter this issue. For some reason the macro is called and checked but not used in the rest of the code. After commenting out the line you can make and install the code without any issue.
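
A minimal sketch of that edit (the line number may differ per protobuf version; dnl is the m4 comment marker used in configure.ac):

$ sed -i '57s/^/dnl /' configure.ac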

Steps needed to get Protobuf working on Oracle Linux are the following:

1) Get the source code from GitHub.
$ git clone https://github.com/google/protobuf

2.a) Run autogen to automatically generate the configure script.
$ ./autogen.sh

2.b) If you run into the AM_PROG_AR issue, open the configure.ac file and comment out the associated line.

3) Run configure to prepare for make
$ ./configure

4) Make the source code
$ make

5) Check the make results
$ make check

6) Install Protobuf
$ make install

By now you should have a complete installation of Google Protobuf. You will now be able to use it, in my case to compile the source from the RethinkDB project to create a RethinkDB instance on my Oracle Linux server.
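
A quick sanity check (assuming the default install prefix /usr/local is on your PATH and the new shared libraries have been picked up by ldconfig) is to ask the compiler for its version:

$ protoc --version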

Monday, January 05, 2015

Compile node.js on Oracle Linux

On a frequent basis new development techniques are developed; some get adopted and some do not. New programming languages are introduced, and some become popular while others die in the first or second stage of their existence. Recently Node.js has been introduced and has seen rapid acceptance, especially with back-end developers for mobile applications and web-based platforms who need strong API-based solutions.

Node.js is an open source, cross-platform runtime environment for server-side and networking applications. Node.js applications are written in JavaScript, and can be run within the Node.js runtime on OS X, Microsoft Windows, Linux and FreeBSD.

Node.js provides an event-driven architecture and a non-blocking I/O API that optimizes an application's throughput and scalability. These technologies are commonly used for real-time web applications.

Installing Node.js can be done via a package manager for a number of Linux distributions. Packages have been provided and installation is a painless task. For other distributions the installation of Node.js is a bit more complex as there is no mainstream installation package available. For Oracle Linux there is no mainstream Node.js RPM, which means that you will need to compile Node.js from source to make it available on your Oracle Linux server.
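
Before you start, make sure a compiler toolchain is present; on a minimal Oracle Linux installation you may need to install it first (a hedged example, using the standard package names from the Oracle Linux repositories):

sudo yum install gcc gcc-c++ make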

Ensuring you can use Node.js on your Oracle Linux server will take the following steps to complete:

1) Download the source code
wget http://nodejs.org/dist/v0.10.35/node-v0.10.35.tar.gz

2) Extract the downloaded source code
tar zxf node-v0.10.35.tar.gz

3) Descend into the new directory
cd node-v0.10.35

4) Configure before compiling
./configure

5) Make and install Node.js
sudo make install

This should give you a freshly compiled version of Node.js on Oracle Linux. You can check whether the compilation has succeeded by executing node with the --version option.
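
For example (the version shown matches the source tarball used above):

$ node --version
v0.10.35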

Sunday, December 28, 2014

Oracle Enterprise Metadata Management for Big Data

When talking about big data the term data lake is often used; the term was originally introduced by James Dixon, Pentaho CTO. It refers to gathering all available data so it can be used in a big data strategy, and collecting all data can indeed be part of that strategy. However, there is a need to ensure your data lake does not turn into a data swamp. Gartner states some warnings on the data lake approach in the “Gartner Says Beware of the Data Lake Fallacy” post on the Gartner website.

In my recent post on the Capgemini website I go into detail on Oracle Enterprise Metadata Management and other Oracle products that can be used in combination with the Capgemini Big Data approach to ensure enterprises get the best possible benefits from implementing a Big Data strategy.


Capgemini promotes a flow which includes Acquisition, Marshalling, Analysis and Action steps, all supported by a Master Data Management & Data Governance solution.

Saturday, December 27, 2014

Using Oracle Weblogic for building Enterprise Portal functions

Modern enterprises increasingly demand that their IT organizations take the role of a service provider, delivering services with a minimum of lead time and on a pay-per-use model. Enterprise business users, and even the IT organizations themselves, have a growing desire to request services, in the widest sense of the word, directly in a self-service manner. Modern cloud solutions, and especially hybrid cloud solutions, potentially provide the answer to this question.

Building a portal to help enterprises answer this question can be done using many techniques and many architectures. In a recent blog post on the Capgemini website I launched the first step to create an open blueprint for an enterprise portal. The solution is not open source; however, the architecture is open and free for everyone to use. The solution is based upon Oracle WebLogic and other Oracle components. The intention is to create a set of posts over time to expand on this subject.


The full article, named "The need for Enterprise Self Service Portals", can be found on the Oracle blog at capgemini.com, among other articles I wrote on this website.

Friday, November 07, 2014

Enabling parallel DML on Oracle Exadata

When using Oracle Exadata you can make use of parallelism on a number of levels and within a number of processes. However, when doing a massive data import the level of parallelism might be a bit disappointing at first. The reason for this is that by default not all parallel options are activated. When you do a data import you want parallel DML (Data Manipulation Language) to be enabled.

You can check the current setting of parallel DML by querying V$SESSION for PDML_STATUS and PDML_ENABLED; as an example you can see the query below.

SELECT pq_status, pdml_status, pddl_status, pdml_enabled FROM v$session WHERE sid = SYS_CONTEXT('userenv','sid');

This will give you an overview of the current settings applied to your session. If you find that PDML_STATUS = DISABLED and PDML_ENABLED = NO, you can change this by executing an alter session as shown below:

ALTER SESSION ENABLE PARALLEL DML;


When you rerun the above query you should now see that PDML_STATUS = ENABLED and PDML_ENABLED = YES. Now that you have set these flags correctly you can provide hints to your statements to ensure you make optimal use of parallelism. Do note that enabling parallel DML alone does not solve all your issues; you will still have to look at the code you will be using during your load process of the data into the Exadata.
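
As a rough illustration of such a hinted statement (the table names and the degree of parallelism below are made up and should be adjusted to your own schema and system):

INSERT /*+ APPEND PARALLEL(sales_target, 8) */ INTO sales_target
SELECT /*+ PARALLEL(sales_source, 8) */ * FROM sales_source;
COMMIT;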

Wednesday, November 05, 2014

Configure Exadata storage cell mail notification

The Oracle Exadata database machine gets a lot of its performance not specifically from the compute nodes or the InfiniBand switches; one of the main game changers is the storage cells used within the Exadata. The primary way, for command line people, to interact with the storage cells is the “cell command line interface”, commonly called CellCLI.

If you want to ensure your storage cell informs you via mail notifications, you can configure or alter the configuration of the storage cell email notification using the CellCLI command.

When making changes to the configuration you first want to know the current configuration of a storage cell. You can execute the following command to request the current configuration:

CellCLI> list cell detail

For mail configuration you will primarily want to look at:

  • smtpServer
  • smtpFromAddr
  • smtpFrom
  • smtpToAddr
  • notificationMethod
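
Instead of scanning the full “list cell detail” output, you can also request only these attributes (a hedged example, assuming a reasonably recent cell software version):

CellCLI> LIST CELL ATTRIBUTES smtpServer, smtpFromAddr, smtpFrom, smtpToAddr, notificationMethod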

To test the current configuration you can have the storage cell send a test mail notification to see if the current configuration works as you expect. The VALIDATE MAIL operation sends a test message using the e-mail attributes configured for the cell. You can use the below CellCLI command for this:

CellCLI> ALTER CELL VALIDATE MAIL

If you want to change something you can use the “ALTER CELL” option from CellCLI. As an example, the command below sets the information for a full configuration to ensure that your Exadata storage cell will send you mail notifications. As you can see it will also send SNMP notifications; however, the full configuration for SNMP is not shown in this example.

CellCLI> ALTER CELL smtpServer='smtp0.seczone.internal.com',            -
                    smtpFromAddr='huntgroup@internal.com',         -
                    smtpFrom='Exadata-Huntgroup',                         -
                    notificationMethod='mail,snmp'
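
Note that the recipient list is set in the same way; the address below is purely illustrative:

CellCLI> ALTER CELL smtpToAddr='dba-team@internal.com'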

Wednesday, October 22, 2014

Zero Data Loss Recovery Appliance

During Oracle OpenWorld 2014 Oracle released the Zero Data Loss Recovery Appliance as one of the new Oracle Engineered Systems. The Zero Data Loss Recovery Appliance is an Oracle Engineered System specifically designed to address backup and recovery related challenges in modern database deployments. It is specifically designed to ensure that a customer can always perform a point-in-time recovery in an always-on economy where downtime results directly in loss of revenue and the loss of data can potentially bankrupt the enterprise.

According to the Oracle documentation the key features of the Zero Data Loss Recovery Appliance are:
  • Real-time redo transport
  • Secure replication
  • Autonomous tape archival
  • End-to-end data validation
  • Incremental-forever backup strategy
  • Space-efficient virtual full backups
  • Backup operations offload
  • Database-level protection policies
  • Database-aware space management
  • Cloud-scale architecture
  • Unified management and control 
According to the same documentation the key benefits of the Zero Data Loss Recovery Appliance are:
  • Eliminate Data Loss
  • Minimal Impact Backups
  • Database Level Recoverability
  • Cloud-scale Data Protection
Even though the Zero Data Loss Recovery Appliance brings some nice features, and the key benefits and key features Oracle states in the documentation are very valid, the main point is not broadcast in the documentation. The mentioned points are in many enterprises already available in the form of self-built solutions based upon products from a number of vendors. Backup software is in most cases Oracle RMAN or a combination of Oracle RMAN and a third-party software solution. Hardware is commonly from different vendors: a vendor for the server hardware, a vendor for the storage and a vendor for the tape appliances.

One of the main benefits of introducing the Zero Data Loss Recovery Appliance is that it provides the perfect leverage to ensure that all backup and recovery strategies are standardized and optimized in an Oracle best-practice manner. In most enterprise deployments you still see that backup and recovery strategies differ across a wide Oracle database deployment landscape.

It is not uncommon that backup and recovery strategies involve multiple teams, multiple tools and scripts, and that multiple ways of implementation are used over time. By not having an optimized and standardized solution for backup and recovery, organizations do not have enterprise-wide insight into how well the data is protected against data loss, and a uniform way of working for recovery is missing. This introduces the risk that data is lost due to missed backups or due to an incompatible way of restoring.

In the below diagram a dual-datacenter solution for the Zero Data Loss Recovery Appliance is shown in which it is connected to an Oracle Exadata machine. However, all databases, regardless of the server platform they are deployed on, can be connected to the Zero Data Loss Recovery Appliance.


When operating a large enterprise-wide Oracle landscape, customers use Oracle Enterprise Manager for full end-to-end monitoring and management. One of the additional benefits of the Zero Data Loss Recovery Appliance is that it can be fully managed by Oracle Enterprise Manager. This means that the complete management of all components is done via Oracle Enterprise Manager. This is in contrast to home-grown solutions, where customers are in some cases forced to use separate management tooling for all the different hardware and software components that make up the full backup and recovery solution.

For more information about the Zero Data Loss Recovery Appliance please also refer to the presentation shown below.



Wednesday, October 08, 2014

Oracle Enterprise Manager, Metering and Chargeback

When discussing IT with the business side of an enterprise, the general opinion is that IT departments should, among other things, be like a utility company: providing services to the business so they can accelerate in what they do, not dictating how to do business but providing the services needed, when, where and how the business needs them. This point of view is a valid one, and is fueled by the rise of cloud and the Business Driven IT Management paradigm.

The often forgotten, overlooked or deliberately ignored part of viewing your IT department as a utility company is that the consumer is charged based upon consumption. In many enterprises, large and small, the funding of the IT department and of projects is based on a percentage of the budget each department receives in the annual budget. The consolidated value of the IT share of all departments is the budget for the IT department. Even though this seems a reasonably fair way, it is in many cases an unfair distribution of IT costs.

When business departments consider the IT department a service organization in the way they consider utility companies service organizations, the natural evolutionary step is that the IT department will invoice the business departments based upon usage. By transforming the financial funding of the IT department from a budget income to a commercially gained income, a number of things will be accomplished:


  • Fair distribution of IT costs between business departments
  • Forcing IT departments to become more effective
  • Forcing IT departments to become more innovative
  • Forcing IT departments to become more financially aware


One of the foundations of this strategy is that the IT department must be able to track the usage of systems. As companies are moving to cloud-based solutions, implementing systems in private clouds and public clouds, this provides the ideal moment to move to a pay-per-use model.

When using Oracle Enterprise Manager as part of your cloud, as for example used in the blog post “The future of the small cloud”, you can also use the “metering and chargeback” options that are part of the cloud foundation of Oracle Enterprise Manager. Oracle Enterprise Manager allows you to monitor the usage of assets that you define, for a price per time unit that you define. When deploying the metering and chargeback solution within Oracle Enterprise Manager, the implementation models to calculate the price per time unit for your internal departments are virtually endless.
The setup of metering and chargeback centers around defining charge plans and assigning the charge plans to specific internal customers and cost centers.


The setup of the full end-to-end solution will take time: time to set up the technical side of things, as shown in the example screenshot below. However, the majority of the time you will need to spend is on identifying and calculating what the exact price for an item should be. This should include all the known costs and hidden costs IT departments incur before a service is delivered to internal customers, for example housing, hosting, management, employees, training, licenses, etc. This all should be calculated into the price per item per time unit. This is a pure financial calculation that needs to be done.


Even though metering and chargeback is part of the Oracle Enterprise Manager solution, in reality most companies use it as a metering and showback solution to inform internal departments about the costs. A next step for companies currently using metering and showback within Oracle Enterprise Manager is to really bill internal departments based upon consumption. This however is more an internal mindset change than a technological implementation.

Implementing “metering and chargeback” is a solution that is needed more and more in the modern enterprise, not purely from a technical point of view but rather from a business model modernization point of view. By implementing Oracle Enterprise Manager as the central management and monitoring solution and including the “metering and chargeback” options, modern enterprises get a huge benefit out of the box and have a direct benefit.

Manage all databases with one tool

When Oracle acquired Sun, a lot of active MySQL users wondered in which direction the development of MySQL would go. Oracle has been developing and expanding the functionality of the MySQL database continuously since the acquisition. The surprising part has been that the integration with Oracle Enterprise Manager had not been developed. That changed during Oracle OpenWorld 2014, when Oracle announced the launch of the Oracle Enterprise Manager plugin for the MySQL Enterprise Edition.

A non-official version of a MySQL plugin has already been around for some time; however, the launch of the official MySQL plugin is significant. Not so much from a new technological point of view, but rather from an integration and management point of view, the introduction of the MySQL plugin is considered important.

The majority of the enterprises that host Oracle databases also host Oracle MySQL databases in their IT infrastructure. The statement that MySQL is only used for small databases and small deployments is incorrect; as an example, Facebook runs tens of thousands of MySQL servers and a typical instance is 1 to 2 TB. As companies implement and use Oracle databases and most likely Oracle middleware, they have the need for central management by using Oracle Enterprise Manager 12c, possibly to improve the day-to-day operations and maintenance of the landscape, possibly to use a cloud-based approach to IT management.

Prior to the launch of the Oracle Enterprise Manager MySQL plugin, companies were forced to use out-of-band management tooling for day-to-day operations.


With the introduction of the Oracle Enterprise Manager MySQL plugin you can now incorporate the management of the MySQL databases into Oracle Enterprise Manager. This will provide you a single point of management and monitoring, resulting directly in a better managed IT landscape and a quick return on investment due to better management.


On a high level the new Oracle Enterprise Manager MySQL plugin provides the following features:

  • MySQL Performance Monitoring
  • MySQL Availability Monitoring
  • MySQL Metric Collection
  • MySQL Alerts and Notifications
  • MySQL Configuration Management
  • MySQL Reports
  • MySQL Remote Monitoring


In general the use of the MySQL plugin for Oracle Enterprise Manager provides you the option to unify the monitoring and management of all your MySQL and Oracle databases in one tool, which will result in better management, improved service availability and stability, as well as a reduction in cost due to centralized tooling and the mitigation of unneeded downtime.

Sunday, September 28, 2014

The future of the small cloud

When talking about cloud, the thoughts of Amazon, Azure and Oracle Cloud immediately come to mind for a lot of people. When talking about private cloud, the general idea is that this is a model which is only valuable for large customers running hundreds or thousands of environments, and which will require a large investment in hardware, software, networking and human resources to deploy a private cloud solution.

Even though the public cloud provides a lot of benefits and relieves companies from CAPEX costs, in some cases it is beneficial to create a private cloud. This is not only the case for large enterprises running thousands of services; it is also the case for small companies. Some of the reasons that a private cloud can be more applicable than a public cloud are, for example:

  • Legal requirements
  • Compliance rules and regulations
  • Confidentiality of data and/or source code
  • Specific needs around control beyond the possibilities of public cloud
  • Specific needs around performance beyond the possibilities of public cloud
  • Specific architectural and technical requirements beyond the possibilities of public cloud

There are more specific reasons that a private cloud, or hybrid cloud, can be more beneficial than a public cloud for small companies; these can be determined on a case-by-case basis. Capgemini provides roadmap architecture services to support customers in determining the best solution for a specific case, which can be public cloud, private cloud or a mix of both in the form of a hybrid cloud. This is next to more traditional solutions that are still very valid in many cases for customers.

One of the main misconceptions around private cloud is that it is considered to be only valid for large deployments and large enterprises. The general opinion is that there is the need for a high initial investment in hardware, software and knowledge. As stated, this is a misconception. By using both Oracle hardware and software there is an option to build a relatively low-cost private cloud which can be managed for a large part from a central graphical user interface in the form of Oracle Enterprise Manager.

A private cloud can be started with a simple deployment of two or more Sun X4-* servers, using Oracle VM as the hypervisor for the virtualization. This can be the starting point for a simple self-service enabled private cloud where departments and developers can provision systems in an infrastructure-as-a-service manner, or provision databases and middleware in the same fashion.


By using the above setup in combination with Oracle Enterprise Manager you can have a simple private cloud up and running in a matter of days. This will enable your business or your local development teams to make use of a private in-house cloud where they can use self-service portals to deploy new virtual machines or databases/applications in a matter of minutes, based upon templates provided by Oracle.

Friday, August 29, 2014

Oracle Database FIPS 140-2 security

With the release of Oracle Database 12.1.0.2 Oracle has introduced a new database parameter related to security. The new parameter DBFIPS_140 is used to ensure your database is secured according to the FIPS 140-2 standard at level 2. FIPS stands for Federal Information Processing Standard and dictates how data should be encrypted at rest and during transmission.

The fact that Oracle now has the option to activate a parameter in the database which will ensure your data is secured in accordance with FIPS 140-2 level 2 is a huge benefit when deploying databases in government environments demanding FIPS compliance; however, it can also be used for non-government systems, as it demonstrates a level of security implemented in your system.

Ensuring your entire solution is FIPS compliant in an end-to-end fashion will take more than only activating the DBFIPS_140 parameter in your database; however, from a database component point of view it is a good thing in the overall solution.

The current DBFIPS_140 parameter is designed to be compliant with FIPS 140-2 level 2. The FIPS 140-2 standard consists of four levels, of which Oracle currently covers level 2. The overall standard has the following descriptions of the levels within FIPS 140-2:


  • FIPS 140-2 Level 1, the lowest, imposes very limited requirements; loosely, all components must be "production-grade" and various egregious kinds of insecurity must be absent.
  • FIPS 140-2 Level 2 adds requirements for physical tamper-evidence and role-based authentication.
  • FIPS 140-2 Level 3 adds requirements for physical tamper-resistance (making it difficult for attackers to gain access to sensitive information contained in the module) and identity-based authentication, and for a physical or logical separation between the interfaces by which "critical security parameters" enter and leave the module, and its other interfaces.
  • FIPS 140-2 Level 4 makes the physical security requirements more stringent, and requires robustness against environmental attacks.


The FIPS 140-2 setting uses the cryptographic libraries included in the Oracle database to ensure encryption of the data; these are designed to meet the federal requirements for data encryption at rest and during transmission. For this Oracle uses a combination of three solutions: a Secure Socket Layer implementation (SSL), Transparent Data Encryption (TDE) and the DBMS_CRYPTO package.
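
As a small, purely illustrative example of the last component (assuming EXECUTE privilege on DBMS_CRYPTO and a 12c database where the SHA-256 constant is available):

DECLARE
  l_hash RAW(32);
BEGIN
  -- hash an illustrative value with SHA-256 via the DBMS_CRYPTO package
  l_hash := DBMS_CRYPTO.HASH(UTL_RAW.CAST_TO_RAW('sensitive value'),
                             DBMS_CRYPTO.HASH_SH256);
  DBMS_OUTPUT.PUT_LINE(RAWTOHEX(l_hash));
END;
/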

To activate the FIPS 140 setting you have to apply the below command and restart your database to ensure the change has taken effect:

ALTER SYSTEM SET DBFIPS_140 = TRUE SCOPE=SPFILE;
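
After the restart you can verify the setting, for example from SQL*Plus:

SHOW PARAMETER DBFIPS_140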

When designing a secure environment for your customer, government or non-government, it is however important to understand that security takes more than only activating DBFIPS_140. Even if you only take the database into account, a really secure Oracle database implementation will take a lot more and will include a full, separate security architecture. The Oracle Advanced Security portfolio for databases contains a lot of products which are purely for the database.


Implementing this, and taking into account that you will still need additional security around networking, operating systems, physical location security, client system security and more, will take more time than an averagely secured system. Only securing the database will provide you a secure solution for your database; however, to ensure true security you will have to apply the same level of measures to all components of your secured landscape.

Tuesday, August 05, 2014

Oracle VirtualBox Disk UUID issue resolved

Oracle VirtualBox is a desktop virtualization technology used by many developers and system administrators to quickly run a virtual operating system on top of their workstation OS. It is freely available from Oracle and has widespread adoption. Even though it is a robust solution for running virtual machines on your workstation, it can in some situations have issues, especially when you change the location of your virtual disks, which might trigger a strange error in the VirtualBox GUI.

Due to running out of disk space I was forced to move some of the virtual disks attached to my virtual machines to another disk on my workstation. Oracle VirtualBox allows you to attach (or detach) disks to a virtual machine via the GUI. However, if you move a virtual disk to another location and try to re-attach it to a virtual machine, the GUI gives a warning like the one below:


The message reads that VirtualBox cannot register the hard disk with a specific UUID because a hard disk with the same UUID is already known. This is due to the fact that VirtualBox keeps track of virtual disk files with a combination of UUID and location. As you move the file, it is seen as a different virtual disk, however with the same UUID. The solution for this is to change the UUID of the file so VirtualBox will see it as a new disk and you will be able to attach it to your virtual machine again. On Windows (host) systems this can be resolved by executing the below command:
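
A minimal sketch of that command (the installation path is the VirtualBox default on Windows and the disk location is purely illustrative):

"C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" internalcommands sethduuid "D:\VirtualMachines\disk1.vdi"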



After executing this command you will see that you are able to attach the disk without any issue and can use it again while it runs from its new location.

Oracle Big Data trends 2014

Oracle has released an insight into the top trends for 2014 in relation to Big Data and analytics. As can be seen from the 10 points that Oracle sees as trends for 2014, there is a clear focus on big data, predictive analytics and integrating this into existing solutions and processes within an enterprise.


1) Mobile analytics are on the rise; plans for mobile BI initiatives will double this year.
2) ½ of organizations will move analytics to the cloud for easier reporting.
3) ¼ of organizations will unite Hadoop-based data reservoirs with data warehouses as a cost-effective method for long-term storage and in-place analysis.
4) Organizations will double the number of people with advanced skills in Hadoop and predictive analysis in the coming year.
5) 33% of human capital management professionals will use big data discovery tools to explore data from performance reviews, internal surveys, professional profiles and insider workplace websites such as Glassdoor.
6) 40% will prioritize predictive analysis to gain insight into big data strategies.
7) 52% will use predictive analytics to gain insight into old business processes.
8) 59% will use decision optimization technologies to provide a more personalized and more effective experience for customer interaction.
9) 44% of decision makers will embrace packaged analytics to integrate with existing ERP systems.
10) Organizations still feel their analytics skills are on a beginner level. To keep up they will focus on developing analytic competences.



Saturday, July 26, 2014

Query Big Data with SQL

Data management used to be “easy” within enterprises; in most common cases data was stored in files on a file system or in a relational database. With some small exceptions, these were the places where you were able to find data. With the explosion of data we see today, and with the innovation around the question of how to handle this data explosion, we see a lot more options coming into play. The rise of NoSQL databases and the rise of HDFS-based Hadoop solutions place data in a lot more places than only the two mentioned.
Having the option to store data where it is most likely to add the most value to the company is, from an architectural point of view, a great addition. Having the option, for example, to not choose a relational database but store data in a NoSQL database or an HDFS file system gives architects a lot more flexibility when creating an enterprise-wide strategy. However, it also causes a new problem: combining data might become much harder. When you store all your data in a relational database you can easily query all the data with a single SQL statement. When parts of your data reside in a relational database, parts in a NoSQL database and parts in an HDFS cluster, answering such a question might become a bit harder and a lot of additional coding might be required to get a single overview.
Oracle announced “Oracle Big Data SQL”, which is an “addition” to the SQL language that enables you to query data not only in the Oracle database but also, in the same select statement, data that resides in other places, those other places being Hadoop HDFS clusters and NoSQL databases. By extending the data dictionary of the Oracle database and allowing it to store information about data that is stored in NoSQL or Hadoop HDFS clusters, Oracle can now make use of those sources in combination with the data stored in the database itself.

The Oracle Big Data SQL way of working allows you to create single queries in your familiar SQL language yet execute them on other platforms. The Oracle Big Data SQL implementation will take care of the translation to other languages while developers can stick to SQL as they are used to.
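
As a rough sketch of what this looks like (the table, column and directory names below are made up, the clauses are simplified, and the exact definition depends on your Big Data SQL configuration), a Hive-backed external table is declared once with the ORACLE_HIVE access driver and can then be joined with regular database tables in plain SQL:

CREATE TABLE web_clicks (
  customer_id NUMBER,
  click_time  TIMESTAMP,
  page_url    VARCHAR2(4000)
)
ORGANIZATION EXTERNAL (TYPE ORACLE_HIVE DEFAULT DIRECTORY default_dir);

SELECT c.cust_name, COUNT(*) AS clicks
FROM   customers c
JOIN   web_clicks w ON w.customer_id = c.customer_id
GROUP  BY c.cust_name;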


Oracle Big Data SQL is available with Oracle Database 12c in combination with the Oracle Exadata engineered system and the Oracle Big Data Appliance engineered system. The use of Oracle engineered systems makes sense as you are able to use InfiniBand connections between the two systems to eliminate the network bottleneck. Also, the entire design of pushing parts of a query to another system is in line with how Exadata works. In the Exadata machine the workload (or number crunching) is done for a large part not on the compute nodes but rather on the storage nodes. This ensures that more CPU cycles are available for other tasks, and sorting, filtering and other things are done where they are supposed to be done: on the storage layer.

A similar strategy is what you see in the implementation of Oracle Big Data SQL. When a query (or part of a query) is pushed to the Oracle Big Data Appliance, only the answer is sent back and not a full set of data. This means that (again) the CPUs of the database instance are not loaded with tasks that can be done somewhere else (on the Big Data Appliance).
The option to use Oracle Big Data SQL has a number of advantages for our customers, on a technical as well as an architectural and integration level. We can now lower the load on database instance CPUs and are not forced to manually create connections between relational databases and NoSQL and Hadoop HDFS solutions, while on the other hand it helps customers get a rapid return on investment. Some Capgemini statements can be found on the Oracle website in a post by Peter Jeffcock and Brad Tewksbury from Oracle after the Oracle key partner briefing on Oracle Big Data SQL.

Sunday, July 13, 2014

Oracle will take three years to become a cloud company

Traditional software vendors who have been relying on a steady income of license revenue are forced to either change their standing business model radically or be overrun by new and upcoming companies. The change that cloud computing is bringing is by some industry analysts compared to the introduction of the Internet. The introduction and rapid growth of the internet started a complete new sub-industry in the IT industry and created the IT bubble which made numerous companies go bankrupt when it burst.

As the current standing companies see the threat and possibilities of cloud computing rising, they are trying to change direction to ensure survival. Oracle, being one of the biggest enterprise-oriented software vendors at this moment, is currently changing direction and stepping into cloud computing full swing, by extending the more traditional way of doing business with tools to create private cloud solutions for customers and also by becoming a new cloud vendor in the form of IaaS, SaaS, DBaaS and some other forms of cloud computing.

According to a recent article from Investor's Business Daily the transition for Oracle will take around three years to complete. According to Susan Anthony, an analyst for Mirabaud Securities, it will take around five years until cloud-based solutions contribute significantly more than the current license sales model:

"As the shift takes place, the software vendors' new license revenues will ... be replaced to some extent by the cloud-subscription model, which within three years will match the revenues that would have been generated by the equivalent perpetual license and, over five years, contribute significantly more"

The key to success for Oracle and for other companies will be to attract differently minded people than they currently have. The traditional way of thinking is so deeply embedded in these companies that a more cloud-minded generation will be needed to help turn the cloud transformation of traditional companies into a success. Michael Turits, an analyst for Raymond James & Associates, states the following on this critical success factor:

"It takes a lot to turn the battleship and transition a legacy (software) company into a cloud company, We believe they are hiring people to focus on cloud sales and that the incentive structure is being altered to speed the transition."

Analysts are united in the belief that this is a needed transition for Oracle to survive, but also that it will hurt the revenue stream of the company in the short term and by doing so will negatively influence the stock price for the upcoming years. Rick Sherlund, a Nomura Securities analyst, wrote in a June 25 research note:

"Oracle, like other traditional, on-premise software vendors, will be financially disadvantaged over the short term as its upfront on-premise license revenues are cannibalized by the recurring cloud-based revenues, therefore, we model expected license revenues to be flat to down for the next two years (during) the transition."

Currently we can see the transition taking place: on June 25, 2014, Mark Hurd presented the Oracle cloud strategy for the upcoming years, covering not only the expansion in global datacenters for hosting the new business model but also the growth predictions for the upcoming years. Looking at the growth in datacenters, you will be able to see that Oracle is serious about the cloud strategy and transformation.


The full presentation deck can be found embedded below: