Sunday, July 13, 2014

Oracle will take three years to become a cloud company

Traditional software vendors that have been relying on a steady income of license revenue are forced to either change their standing business model radically or be overrun by new and upcoming companies. The change that cloud computing brings is compared by some industry analysts to the introduction of the Internet. The introduction and rapid growth of the internet started a completely new sub-industry within IT and created the IT bubble, which bankrupted numerous companies when it burst.

As the established companies see both the threat and the possibilities of cloud computing rising, they are trying to change direction to ensure survival. Oracle, being one of the biggest enterprise-oriented software vendors at this moment, is currently changing direction and stepping into cloud computing at full swing. It does this by extending the more traditional way of doing business, providing tools to create private cloud solutions for customers, and also by becoming a cloud vendor itself in the form of IaaS, SaaS, DBaaS and other forms of cloud computing.

According to a recent article in Investor's Business Daily, the transition for Oracle will take around three years to complete. According to Susan Anthony, an analyst at Mirabaud Securities, it will take around five years until cloud-based solutions contribute significantly more than the current license sales model:

"As the shift takes place, the software vendors' new license revenues will ... be replaced to some extent by the cloud-subscription model, which within three years will match the revenues that would have been generated by the equivalent perpetual license and, over five years, contribute significantly more"

The key to success for Oracle and for other companies will be to attract people with a different mindset than they currently have. The traditional way of thinking is so deeply embedded in these companies that a more cloud-minded generation will be needed to turn the cloud transformation of traditional companies into a success. Michael Turits, an analyst at Raymond James & Associates, states the following on this critical success factor:

"It takes a lot to turn the battleship and transition a legacy (software) company into a cloud company, We believe they are hiring people to focus on cloud sales and that the incentive structure is being altered to speed the transition."

Analysts are united in the belief that this transition is necessary for Oracle to survive, but also that, in the short term, it will hurt the revenue stream of the company and thereby negatively influence the stock price for the upcoming years. Rick Sherlund, a Nomura Securities analyst, wrote in a June 25 research note:

"Oracle, like other traditional, on-premise software vendors, will be financially disadvantaged over the short term as its upfront on-premise license revenues are cannibalized by the recurring cloud-based revenues, therefore, we model expected license revenues to be flat to down for the next two years (during) the transition."

The transition is already visible: on June 25, 2014, Mark Hurd presented the Oracle Cloud Strategy for the upcoming years, covering not only the expansion of global datacenters for hosting the new business model but also the growth predictions for the upcoming years. Looking at the growth in datacenters, you can see that Oracle is serious about its cloud strategy and transformation.


The full presentation deck can be found embedded below:


Wednesday, July 09, 2014

Room rates based upon big data

Hotels have traditionally had flexible rates for their rooms; the never-ending challenge for hotels, however, is when to raise rates and when to drop them. A commonly seen solution is to raise prices as the occupancy date approaches and, one or two days in advance, to drop the price if the room has not been sold yet. On average this works quite well, but it is a suboptimal and unsophisticated way of introducing dynamic pricing for hotel rooms. The real value of a room depends on a large set of parameters that are constantly changing.

For example, the weather, vacations, conventions in town or airlines that are on strike will all influence the demand for rooms. If you are able to react to changing variables directly, you can make the average hotel room more profitable. Keeping track of all kinds of information from a large number of sources and benchmarking it against results from the past is an extremely difficult task to do manually, or even to build a custom application for. Duetto recently raised $21M in venture capital from Accel Partners to expand their SaaS solution, which provides exactly this service to hotels.

Duetto provides a SaaS solution from the cloud that keeps track of all potentially interesting data sources and mines this data to dynamically change the room rates for a hotel based upon the results.


By mining and processing big data, Duetto is able to advise hotels on when to drop or raise the price. This can change at a moment's notice and without hotel employees having to keep track of everything happening in the area. Duetto provides hotels an easy solution for implementing intelligent dynamic pricing. The big advantage Duetto offers is that it is a SaaS solution that is ready to run from day one, instead of a home-grown solution which might take a long time to develop, test and benchmark before it becomes usable.

It is not unlikely that Duetto will expand its services to other industries in the near future. The demand for dynamic pricing based upon big data will only grow in the upcoming years, and Duetto is in an ideal position to expand its services to new growth markets.

Monday, July 07, 2014

Using R and Oracle Exadata

Currently the R language is the language of choice for statistical computing and is widely adopted in the commercial and scientific communities. R is a free statistical language, developed in 1993 and released under the GNU General Public License. Traditionally R has been used for statistical computations on large sets of data, and because of this it is seeing adoption in the Big Data ecosystem, even though it is not as widely adopted as, for example, the MapReduce programming paradigm, which owes its high adoption rate to Apache Hadoop.

Nevertheless, R is claiming its place in the Big Data ecosystem and is seeing enterprise-grade adoption. As a result, a number of enterprise-ready R implementations are available. Oracle is one of the companies that have developed an enterprise-ready R distribution, named “Oracle R Enterprise”. What is interesting about the Oracle R Enterprise distribution is that it becomes part of the database server itself.

In general, the idea is that multiple R engines are spawned on the database server and work in parallel to execute the required computations. Depending on your program, the results can be stored in the database, returned to a workstation, or sent to a Hadoop cluster for additional computations. In addition, thanks to the Hadoop connector, R inside the database server can potentially also make use of data inside the Hadoop cluster if needed. From a high-level perspective this looks like the diagram below.


Oracle provides engineered systems for Big Data, analytics and the Oracle database. This means we can also deploy the scenario outlined above on an engineered systems deployment. The diagram below shows a pure Oracle Engineered Systems solution; however, this is not required: you can use Oracle engineered systems where you deem them necessary and leave them out where you do not need them. That said, there are large benefits to deploying a full engineered systems solution.

In the example diagram above, the deployment uses Oracle Exadata, Oracle Big Data Appliances and the Oracle Exalytics machine. By combining those you benefit both from R and from the capabilities of the Oracle Engineered Systems. When you need to deploy R for analytical computing and you are also using Oracle databases and applications on a wider scale in your IT landscape, it is extremely worthwhile to give Oracle R Enterprise consideration and, depending on the size of your data, to combine it with Oracle Engineered Systems.

Monday, June 30, 2014

Puppet and Oracle Enterprise Manager

Enterprises have been using virtualization for years as part of their datacenter strategy. Recently, virtualization solutions have been turning into private cloud solutions, which enable business users to quickly request new systems and make full use of the benefits of cloud within the confinement of their own datacenter. A number of vendors provide both the software and the hardware to kickstart the deployment of a private cloud.

Oracle provides an engineered system in the form of the Oracle Virtual Compute Appliance, a pre-installed combination of hardware and software which enables customers to get up and running in days instead of months. However, a similar solution can also be created “manually”, as all software components are available separately from the OVCA. Central within the private cloud strategy from Oracle is Oracle Enterprise Manager 12c in combination with Oracle VM and Oracle Linux.

In the below diagram you can see a typical deployment of a private cloud solution based upon Oracle software.


As you can see in the above diagram, Oracle Enterprise Manager plays a vital role in the Oracle private cloud architecture. Oracle positions Oracle Enterprise Manager as the central monitoring and provisioning tooling for the infrastructure components as well as the application and database components. Next to this, Oracle Enterprise Manager is used for patching operating system components as well as application and database components. In general, Oracle positions Oracle Enterprise Manager as the central solution for your entire private cloud. Oracle Enterprise Manager ties in with Oracle VM Manager and enables customers to request and provision new virtual servers in an administrator role, but also through the cloud self-service portals, where users can create (and destroy) their own virtual servers. Before you can do so, however, you have to ensure that your Oracle VM Manager is connected to Oracle Enterprise Manager and that Oracle VM itself is configured.

The initial steps to configure Oracle VM to be able to run virtual machines are outlined below and are commonly only needed once.


As you can observe, quite a number of steps are needed before you can create your first virtual machine. Not included in this diagram are the efforts needed to set up Oracle Enterprise Manager, connect it to Oracle VM Manager, and activate the self-service portals that allow users to create a virtual machine without the need for an administrator.

In general, when you create or provision a new virtual machine via Oracle tooling, you make use of a template: a pre-defined unit which can contain one or more virtual machines and potentially a full application or database. For this you can use the Oracle VM template builder, or you can download pre-defined templates. The templates stored in your Oracle VM template repository can then be used to provision new virtual machines. A commonly used strategy is to find the right mix between things you put into your template and things you configure and install using a first-boot script, which is started the first time a new virtual machine boots. Even though you can do a lot in the first-boot script, this still potentially requires you to create and maintain a large set of templates, which might differ substantially per application you would like to install or per environment in which it will be used.

In a more ideal scenario you do not develop a large set of templates; you maintain only one (or a very limited set of) templates and use external tooling to ensure the correct actions are taken. Recently Oracle changed some of its policies, and the Oracle Linux template for Oracle VM which you can download from the Oracle website is nothing more than a bare minimum installation, where almost every package you might take for granted is missing. This means there is no overhead of packages and services started that you do not want or need; the system is fully yours to configure. This configuration can be done with first-boot scripting, which you would largely need to build and customize yourself, or with external tooling.

A good solution for this is to make use of Puppet. This means the first-boot script only needs to install the Puppet agent on the newly created virtual machine. By making use of node classification, the Puppet master can determine the intended use of the new machine and install packages and configure the machine accordingly.
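As a minimal sketch of such node classification, assuming a hypothetical `oracle_db` module, a hypothetical `base::hardening` class, and a naming convention where database servers are called db01, db02 and so on, the site manifest on the Puppet master could look like this:

```puppet
# site.pp on the Puppet master (module and node names are illustrative)
node default {
  # Baseline configuration applied to every freshly provisioned VM
  include base::hardening
}

# Any VM whose hostname matches db<NN> is classified as a database server
node /^db\d+$/ {
  include oracle_db          # hypothetical module installing DB prerequisites
  class { 'ntp':
    servers => ['pool.ntp.org'],
  }
}
```

With this in place, the first-boot script only has to install the Puppet agent package and point it at the master; everything machine-specific is driven from the manifests.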


Even though this is not part of the Oracle best practices, it is a good solution for companies that need to provision a large set of different types of virtual machines automatically or semi-automatically. By implementing Puppet you can keep the number of Oracle VM templates to a minimum and keep the maintenance on the templates extremely limited. All changes to the provisioning of a certain type of virtual machine can be made by changing Puppet scripts. An additional benefit is that this is a non-intrusive customization of the Oracle VM way of working: you can stay true to the Oracle best practices and add the Puppet best practices on top.

As a warning: on the Puppet website a number of scripts for Oracle databases and other Oracle software are available. Even though they do tend to work, it is advised to be extremely cautious about using them, and you should be aware that they might harm your application and database software installation. It is good to look at their inner workings before applying them in your production cloud. However, when tested and approved as working for your needs, they might help you speed up deployments.

Saturday, June 07, 2014

Oracle Cloud Periodic Table

Cloud computing, and cloud in general, is a much-discussed topic which defines a new era of computing and how we think about it. Defining the cloud is hard and depends very much on your point of view. Many vendors have tried to describe what cloud computing is, and you will find that they all have a different explanation based on their point of view, which makes creating a single description of cloud computing hard. When you are looking for a pure definition of cloud computing, the best source to turn to is NIST (the National Institute of Standards and Technology), which has given a definition of cloud computing that might be one of the best ways of stating it.

The NIST definition lists five essential characteristics of cloud computing: on-demand self-service, broad network access, resource pooling, rapid elasticity or expansion, and measured service. It also lists three "service models" (software, platform and infrastructure), and four "deployment models" (private, community, public and hybrid) that together categorize ways to deliver cloud services. The definition is intended to serve as a means for broad comparisons of cloud services and deployment strategies, and to provide a baseline for discussion from what is cloud computing to how to best use cloud computing.

To help customers understand cloud and cloud computing better, and to show that cloud computing is not a single solution but consists of many solutions which can be combined into other solutions, Oracle has released a short video that builds on an analogy with the Periodic Table of Elements, called the Oracle Cloud Periodic Table.


This video shows Oracle's vision on cloud computing, or at least part of it, and creates the mindset that your specific cloud solution will most likely be a combination of a number of modules offered from within a cloud platform. Oracle is not the only vendor making use of this model; it is a growing trend in hybrid clouds and is largely based upon open standards and the as-a-Service way of thinking.


Wednesday, June 04, 2014

Define new blackout reasons in Oracle Enterprise Manager 12c

Oracle Enterprise Manager is positioned by Oracle as the standard solution for monitoring and managing Oracle and non-Oracle hardware and software. Oracle Enterprise Manager is quickly becoming the standard tooling for many organisations that operate an Oracle implementation inside their corporate IT landscape.

Oracle Enterprise Manager monitors all targets and has the ability to alert administrators when an issue occurs. For example, when a host or a database is down, an alert is created and additional notifications can be triggered.

In essence this is a great solution; however, when you perform maintenance, for example, you do not want Oracle Enterprise Manager to send out an alert as you intentionally bring down some components or create a situation which might trigger one. For this, Oracle provides you the ability to create a blackout from both the graphical user interface and the CLI. A blackout prevents alerts from being sent out while you do your maintenance.

One of the things you have to provide during the creation of a blackout is the blackout reason, a pre-set description of the blackout, next to a free-text description. The blackout reason enables you to report on blackouts in a more efficient manner.

Oracle provides a number of blackout reasons; however, your organisation might require other blackout reasons specific to your situation. Using the CLI you can query all the blackout reasons and also create new ones.

To get an overview of all blackout reasons defined in your Oracle Enterprise Manager instance, execute the following command:

emcli get_blackout_reasons

If the default set of blackout reasons does not provide the reason you require, you can create your own custom blackout reason. Using the CLI, execute the following command:

emcli add_blackout_reason -name=""

As an example, if you need a reason named "DOWNTIME monthly cold backup", you should execute the following command. The next time you execute get_blackout_reasons you will see this new reason in the list, and it is also directly available in the GUI when creating a blackout.

emcli add_blackout_reason -name="DOWNTIME monthly cold backup"

Tuesday, June 03, 2014

Resolving missing YAML issue in Perl CPAN

When installing modules as part of your Perl ecosystem, or when updating modules inside it, you most likely use CPAN or CPANM. CPAN stands for Comprehensive Perl Archive Network and is the default location where a large set of additional modules for the Perl programming language reside and can be downloaded. CPAN provides an interactive CLI interface which you can use to install new modules. In essence CPAN is much more than just a tool and a download location; CPAN is a thriving ecosystem where people add new software and modules to the ever-extending CPAN archive on a daily basis. The below image shows what the CPAN ecosystem looks like from a high-level perspective:



When you use the CPAN CLI out of the box you might be hit with a number of warnings about YAML missing. YAML stands for "YAML Ain't Markup Language" and is a human-friendly data serialization standard for all programming languages, with implementations for C/C++, Ruby, Python, Java, Perl, C#, .NET, PHP, OCaml, JavaScript, ActionScript, Haskell and a number of others.

The warning messages you get when using CPAN might look like the ones below:

"YAML' not installed, falling back to Data::Dumper and Storable to read prefs '/root/.cpan/prefs"

"Warning (usually harmless): 'YAML' not installed, will not store persistent state"

The way to resolve this is very simple. First ensure you have the YAML module installed. If this is the case, inform CPAN that you would like to use YAML by executing the two commands below inside the CPAN shell; this should resolve the repeating warnings when using CPAN.

o conf yaml_module YAML
o conf commit

Exalogic and Oracle Ops Center

Oracle Exalogic, as part of the Oracle Engineered Systems portfolio, can be completely managed by making use of Oracle Enterprise Manager. It is within the strategy of Oracle to ensure all products can be tied into Oracle Enterprise Manager as the central maintenance and monitoring solution for the enterprise. Traditionally, Oracle Enterprise Manager has its roots in software monitoring and management and is not used for hardware monitoring and maintenance. Oracle Ops Center, originally from Sun, was built for those tasks.

Oracle has integrated the two solutions, Oracle Enterprise Manager and Oracle Ops Center, to form a single view. Even though under the hood they are still two different products, the integration is getting more and more mature, and the two products start to act as one, where the OEM 12c core is used for software tasks and the Ops Center core for hardware tasks. The below image shows the split between the two products.

As the Oracle Exalogic engineered system consists of both hardware and software, managing one or more Exalogic instances requires you to have both products configured and working together to provide a full end-to-end monitoring and maintenance solution.

The below video gives a short introduction of the capabilities of Oracle OPS center in combination with Oracle Exalogic.


Tuesday, May 06, 2014

Enforce IPv4 on Oracle Linux instead of IPv6

When you install Oracle Linux server in a quick-and-dirty way and boot the system for the first time after installation, you may well find that you do not have a network interface up, even though you expect one. You might also note that by default it prefers IPv6 over IPv4, while you may still require IPv4. The solution is quite easy. Remember, however, that this blogpost acts more as a quick reference for myself for making changes to my test machines for internal testing; this should be done in a more proper manner when creating a real server for use within your company.

To ensure that your network interface (eth0) will start by default edit the following file:
/etc/sysconfig/network-scripts/ifcfg-eth0

Ensure you have the below line in this file:
ONBOOT=yes

This will ensure your server will start your network interface on boot. To ensure you will have IPv4 enabled and not IPv6 you can edit the following file:
/etc/sysconfig/network

Ensure you have the below line in this file:
NETWORKING_IPV6=no

This should ensure all is set to use your server as a test server. By executing ifconfig eth0 down followed by ifconfig eth0 up you will bounce your network interface, and you are good to go.

Thursday, April 24, 2014

Java 8 improved features overview

Oracle has released the new version of the Java programming language; the current version is Java 8. A number of changes and new features have been added in the new release. Oracle has also given the upcoming internet of things a lot of thought and has ensured that you can now, even more easily, create extremely small Java applications to run on devices. This is a great step in the direction of Java ending up on more and more devices and further claiming its place in the world of the "internet of things".

The below video is a great and quick introduction to the new Java 8 release.



The new Java 8 release includes the following new or improved features:

Java Programming Language

  • Lambda Expressions, a new language feature, has been introduced in this release. They enable you to treat functionality as a method argument, or code as data. Lambda expressions let you express instances of single-method interfaces (referred to as functional interfaces) more compactly.
  • Method references provide easy-to-read lambda expressions for methods that already have a name.
  • Default methods enable new functionality to be added to the interfaces of libraries and ensure binary compatibility with code written for older versions of those interfaces.
  • Repeating Annotations provide the ability to apply the same annotation type more than once to the same declaration or type use.
  • Type Annotations provide the ability to apply an annotation anywhere a type is used, not just on a declaration. Used with a pluggable type system, this feature enables improved type checking of your code.
  • Improved type inference.
  • Method parameter reflection.
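A minimal, self-contained sketch of three of these language features, lambda expressions, method references and default methods, follows; the class and interface names are illustrative, not from the release notes:

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.Predicate;
import java.util.stream.Collectors;

public class LambdaDemo {
    // A functional interface: one abstract method, so a lambda can implement it
    interface Greeter {
        String greet(String name);
        // Default method: new behavior added without breaking existing implementations
        default String shout(String name) { return greet(name).toUpperCase(); }
    }

    // Method reference String::isEmpty used where a Predicate<String> is expected
    static List<String> nonEmpty(List<String> words) {
        Predicate<String> empty = String::isEmpty;
        return words.stream().filter(empty.negate()).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        Greeter g = name -> "Hello, " + name;   // lambda implementing Greeter
        System.out.println(g.greet("Java"));    // Hello, Java
        System.out.println(g.shout("Java"));    // HELLO, JAVA
        System.out.println(nonEmpty(Arrays.asList("", "a", "bb"))); // [a, bb]
    }
}
```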

Collections

  • Classes in the new java.util.stream package provide a Stream API to support functional-style operations on streams of elements. The Stream API is integrated into the Collections API, which enables bulk operations on collections, such as sequential or parallel map-reduce transformations.
  • Performance Improvement for HashMaps with Key Collisions
  • Compact Profiles contain predefined subsets of the Java SE platform and enable applications that do not require the entire Platform to be deployed and run on small devices.
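As a short sketch of the Stream API on a collection (the numbers and method names below are illustrative):

```java
import java.util.Arrays;
import java.util.List;

public class StreamDemo {
    // Sum of the squares of the even numbers, written as a stream pipeline
    static int sumOfEvenSquares(List<Integer> xs) {
        return xs.stream()
                 .filter(x -> x % 2 == 0)  // keep even values
                 .mapToInt(x -> x * x)     // square each one
                 .sum();                   // reduce to a single int
    }

    public static void main(String[] args) {
        List<Integer> xs = Arrays.asList(1, 2, 3, 4, 5);
        System.out.println(sumOfEvenSquares(xs)); // 4 + 16 = 20
        // The same bulk operation runs in parallel by swapping stream() for parallelStream()
        long evens = xs.parallelStream().filter(x -> x % 2 == 0).count();
        System.out.println(evens); // 2
    }
}
```

Switching between sequential and parallel execution without rewriting the pipeline is exactly the kind of bulk operation the integration with the Collections API enables.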

Security

  • Client-side TLS 1.2 enabled by default
  • New variant of AccessController.doPrivileged that enables code to assert a subset of its privileges, without preventing the full traversal of the stack to check for other permissions
  • Stronger algorithms for password-based encryption
  • SSL/TLS Server Name Indication (SNI) Extension support in JSSE Server
  • Support for AEAD algorithms: The SunJCE provider is enhanced to support AES/GCM/NoPadding cipher implementation as well as GCM algorithm parameters. And the SunJSSE provider is enhanced to support AEAD mode based cipher suites. See Oracle Providers Documentation, JEP 115.
  • KeyStore enhancements, including the new Domain KeyStore type java.security.DomainLoadStoreParameter, and the new command option -importpassword for the keytool utility
  • SHA-224 Message Digests
  • Enhanced Support for NSA Suite B Cryptography
  • Better Support for High Entropy Random Number Generation
  • New java.security.cert.PKIXRevocationChecker class for configuring revocation checking of X.509 certificates
  • 64-bit PKCS11 for Windows
  • New rcache Types in Kerberos 5 Replay Caching
  • Support for Kerberos 5 Protocol Transition and Constrained Delegation
  • Kerberos 5 weak encryption types disabled by default
  • Unbound SASL for the GSS-API/Kerberos 5 mechanism
  • SASL service for multiple host names
  • JNI bridge to native JGSS on Mac OS X
  • Support for stronger strength ephemeral DH keys in the SunJSSE provider
  • Support for server-side cipher suites preference customization in JSSE

JavaFX

  • The new Modena theme has been implemented in this release. For more information, see the blog at fxexperience.com.
  • The new SwingNode class enables developers to embed Swing content into JavaFX applications. See the SwingNode javadoc and Embedding Swing Content in JavaFX Applications.
  • The new UI Controls include the DatePicker and the TreeTableView controls.
  • The javafx.print package provides the public classes for the JavaFX Printing API. See the javadoc for more information.
  • The 3D Graphics features now include 3D shapes, camera, lights, subscene, material, picking, and antialiasing. The new Shape3D (Box, Cylinder, MeshView, and Sphere subclasses), SubScene, Material, PickResult, LightBase (AmbientLight and PointLight subclasses) , and SceneAntialiasing API classes have been added to the JavaFX 3D Graphics library. The Camera API class has also been updated in this release. See the corresponding class javadoc for javafx.scene.shape.Shape3D, javafx.scene.SubScene, javafx.scene.paint.Material, javafx.scene.input.PickResult, javafx.scene.SceneAntialiasing, and the Getting Started with JavaFX 3D Graphics document.
  • The WebView class provides new features and improvements. Review Supported Features of HTML5 for more information about additional HTML5 features including Web Sockets, Web Workers, and Web Fonts.
  • Enhanced text support including bi-directional text and complex text scripts such as Thai and Hindi in controls, and multi-line, multi-style text in text nodes.
  • Support for Hi-DPI displays has been added in this release.
  • The CSS Styleable* classes became public API. See the javafx.css javadoc for more information.
  • The new ScheduledService class allows a service to be automatically restarted.
  • JavaFX is now available for ARM platforms. JDK for ARM includes the base, graphics and controls components of JavaFX.

Tools

  • The jjs command is provided to invoke the Nashorn engine.
  • The java command launches JavaFX applications.
  • The java man page has been reworked.
  • The jdeps command-line tool is provided for analyzing class files.
  • Java Management Extensions (JMX) provide remote access to diagnostic commands.
  • The jarsigner tool has an option for requesting a signed time stamp from a Time Stamping Authority (TSA).
  • Javac tool
  • The -parameters option of the javac command can be used to store formal parameter names and enable the Reflection API to retrieve formal parameter names.
  • The type rules for equality operators in the Java Language Specification (JLS) Section 15.21 are now correctly enforced by the javac command.
  • The javac tool now has support for checking the content of javadoc comments for issues that could lead to various problems, such as invalid HTML or accessibility issues, in the files that are generated when javadoc is run. The feature is enabled by the new -Xdoclint option. For more details, see the output from running "javac -X". This feature is also available in the javadoc tool, and is enabled there by default.
  • The javac tool now provides the ability to generate native headers, as needed. This removes the need to run the javah tool as a separate step in the build pipeline. The feature is enabled in javac by using the new -h option, which is used to specify a directory in which the header files should be written. Header files will be generated for any class which has either native methods, or constant fields annotated with a new annotation of type java.lang.annotation.Native.

Javadoc tool

  • The javadoc tool supports the new DocTree API that enables you to traverse Javadoc comments as abstract syntax trees.
  • The javadoc tool supports the new Javadoc Access API that enables you to invoke the Javadoc tool directly from a Java application, without executing a new process. See the javadoc what's new page for more information.
  • The javadoc tool now has support for checking the content of javadoc comments for issues that could lead to various problems, such as invalid HTML or accessibility issues, in the files that are generated when javadoc is run. The feature is enabled by default, and can also be controlled by the new -Xdoclint option. For more details, see the output from running "javadoc -X". This feature is also available in the javac tool, although it is not enabled by default there.

Internationalization

  • Unicode Enhancements, including support for Unicode 6.2.0
  • Adoption of Unicode CLDR Data and the java.locale.providers System Property
  • New Calendar and Locale APIs
  • Ability to Install a Custom Resource Bundle as an Extension
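As a small sketch of the new Calendar and Locale APIs (the dates and locales chosen here are just illustrations): Calendar.Builder gives fluent construction instead of mutating a Calendar field by field, and Locale.filter does RFC 4647 language-range matching, for example against an HTTP Accept-Language header.

```java
import java.util.Arrays;
import java.util.Calendar;
import java.util.List;
import java.util.Locale;

public class CalendarLocaleDemo {

    // Calendar.Builder (new in JDK 8): fluent, build-once construction.
    static Calendar buildDate() {
        return new Calendar.Builder()
                .setCalendarType("gregory")
                .setDate(2014, Calendar.JULY, 13) // month is zero-based via the constant
                .build();
    }

    // Locale.filter (new in JDK 8): match a priority list of language ranges
    // against the locales an application supports.
    static List<Locale> match(String acceptLanguage) {
        List<Locale.LanguageRange> ranges = Locale.LanguageRange.parse(acceptLanguage);
        List<Locale> supported = Arrays.asList(Locale.forLanguageTag("en-US"),
                                               Locale.forLanguageTag("nl-NL"));
        return Locale.filter(ranges, supported);
    }

    public static void main(String[] args) {
        System.out.println(buildDate().get(Calendar.YEAR));
        System.out.println(match("nl-NL,en;q=0.5"));
    }
}
```

The filter result is ordered by the weights in the range list, so the best-matching locale comes first.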

Deployment

  • For sandbox applets and Java Web Start applications, URLPermission is now used to allow connections back to the server from which they were started. SocketPermission is no longer granted.
  • The Permissions attribute is required in the JAR file manifest of the main JAR file at all security levels.

Date-Time Package

  • A new set of packages that provide a comprehensive date-time model.
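A short sketch of the new java.time model (the dates used are arbitrary): values are immutable and thread-safe, unlike java.util.Date and Calendar, and Period expresses the difference between dates in human units.

```java
import java.time.LocalDate;
import java.time.Period;

public class DateTimeDemo {

    // Period between two dates in human units (years/months/days).
    static Period age(LocalDate start, LocalDate end) {
        return Period.between(start, end);
    }

    public static void main(String[] args) {
        LocalDate start = LocalDate.of(2014, 4, 14);      // arbitrary example date
        LocalDate end = start.plusMonths(3).minusDays(1); // 2014-07-13

        Period p = age(start, end);
        System.out.println(p.getMonths() + " months, " + p.getDays() + " days");
    }
}
```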

Scripting 

  • Nashorn JavaScript Engine


Pack200

  • Pack200 Support for Constant Pool Entries and New Bytecodes Introduced by JSR 292
  • JDK 8 support for class file changes specified by JSR 292, JSR 308 and JSR 335

IO and NIO

  • New SelectorProvider implementation for Solaris based on the Solaris event port mechanism. To use, run with the system property java.nio.channels.spi.SelectorProvider set to the value sun.nio.ch.EventPortSelectorProvider.
  • Decrease in the size of the /jre/lib/charsets.jar file
  • Performance improvement for the java.lang.String(byte[], *) constructor and the java.lang.String.getBytes() method.

java.lang and java.util Packages

  • Parallel Array Sorting
  • Standard Encoding and Decoding Base64
  • Unsigned Arithmetic Support
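These three additions can be sketched in a few lines (the sample strings and numbers are made up for the example): a standard Base64 codec at last, helpers that reinterpret int bits as unsigned values, and array sorting parallelized over the Fork/Join common pool.

```java
import java.util.Arrays;
import java.util.Base64;

public class Jdk8UtilDemo {

    // Base64: standard encoder/decoder, no more reliance on sun.misc classes.
    static String roundTrip(String text) {
        String encoded = Base64.getEncoder().encodeToString(text.getBytes());
        return new String(Base64.getDecoder().decode(encoded));
    }

    public static void main(String[] args) {
        System.out.println(roundTrip("cloud"));

        // Unsigned arithmetic helpers: reinterpret the bit pattern of -1.
        System.out.println(Integer.toUnsignedLong(-1));

        // Parallel array sorting on the Fork/Join common pool.
        int[] data = {5, 3, 1, 4, 2};
        Arrays.parallelSort(data);
        System.out.println(Arrays.toString(data));
    }
}
```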

JDBC

  • The JDBC-ODBC Bridge has been removed.
  • JDBC 4.2 introduces new features.
Java DB

  • JDK 8 includes Java DB 10.10.

Networking

  • The class java.net.URLPermission has been added.
  • In the class java.net.HttpURLConnection, if a security manager is installed, calls that request to open a connection require permission.
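A minimal sketch of the new URLPermission class (the host and paths are made up for the example): the "/-" suffix matches the path and everything below it, and the action string has the form "method-list:header-list".

```java
import java.net.URLPermission;

public class UrlPermissionDemo {

    // Does the granted permission cover the requested one?
    static boolean covers(String grantedUrl, String grantedActions,
                          String requestedUrl, String requestedActions) {
        return new URLPermission(grantedUrl, grantedActions)
                .implies(new URLPermission(requestedUrl, requestedActions));
    }

    public static void main(String[] args) {
        // GET and POST under /api/, with the Accept request header allowed,
        // implies a plain GET on a concrete resource below that path.
        System.out.println(covers("http://example.com/api/-", "GET,POST:Accept",
                                  "http://example.com/api/data", "GET"));
    }
}
```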

Concurrency

  • Classes and interfaces have been added to the java.util.concurrent package.
  • Methods have been added to the java.util.concurrent.ConcurrentHashMap class to support aggregate operations based on the newly added streams facility and lambda expressions.
  • Classes have been added to the java.util.concurrent.atomic package to support scalable updatable variables.
  • Methods have been added to the java.util.concurrent.ForkJoinPool class to support a common pool.
  • The java.util.concurrent.locks.StampedLock class has been added to provide a capability-based lock with three modes for controlling read/write access.
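Two of these additions sketched together (the counter and map contents are invented for the example): StampedLock's optimistic-read mode avoids lock acquisition entirely when no writer interferes, and ConcurrentHashMap's new aggregate operations reduce over the map, possibly in parallel.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.StampedLock;

public class ConcurrencyDemo {
    private final StampedLock lock = new StampedLock();
    private int counter = 0;

    int read() {
        // Optimistic read: no lock held; validate afterwards.
        long stamp = lock.tryOptimisticRead();
        int value = counter;
        if (!lock.validate(stamp)) {
            // A write happened in between: fall back to a full read lock.
            stamp = lock.readLock();
            try {
                value = counter;
            } finally {
                lock.unlockRead(stamp);
            }
        }
        return value;
    }

    void increment() {
        long stamp = lock.writeLock();
        try {
            counter++;
        } finally {
            lock.unlockWrite(stamp);
        }
    }

    public static void main(String[] args) {
        ConcurrencyDemo demo = new ConcurrencyDemo();
        demo.increment();
        demo.increment();

        // ConcurrentHashMap aggregate operation: reduce over all values;
        // the threshold of 1 allows maximal parallelism on large maps.
        ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();
        map.put("a", 1);
        map.put("b", 2);
        Integer sum = map.reduceValues(1, Integer::sum);

        System.out.println(demo.read() + " " + sum);
    }
}
```

Note that StampedLock stamps are not reentrant and must be handed back explicitly, which is why the try/finally blocks matter.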
Java XML - JAXP

Monday, April 21, 2014

Security: How to Focus on Risk that Matters

Security is, or should be, one of the most important areas in your entire IT landscape. It should be on the priority list in all layers of your IT organisation, and everyone should be aware and involved up to a certain level. The issue with security is that it is not always clear where you need to put your focus, which parts are important and which parts are less important (however still important). The people at Rapid7 have put together a nice webcast to help you understand some more about how to prioritize certain things in your security strategy.


All assets aren't created equal, and they shouldn't be treated the same way. Security professionals know the secret to running an effective risk management program is providing business context to risk. However, it is easier said than done. Every organization is unique: all have different combinations of systems, users, business models, compliance requirements, and vulnerabilities. Many security products tell you which risk you should focus on first, but do not take into account the unique make-up and priorities of each organization.

You can watch the recording of this webcast here:

Night Vision For Your Network: How to Focus on Risk that Matters



Upgrade Oracle APEX ORA-22288 error resolved

Oracle APEX provides a very nice and easy platform to build small (or even large) web-based applications within the Oracle APEX framework on top of an Oracle database. For developers who want to work with Oracle APEX on their own laptop and who do not want to deploy it directly on their workstation's operating system, there is the option to download a complete Linux operating system with a working APEX installation. One of the things you see with such downloadable virtual images is that they are not always at the latest version and patch level. In essence this is not an issue, because you are using it as a local test and development system.

However, in some cases you might want to be on the latest version of APEX because you would like to work with some of the latest features available. Upgrading APEX is quite easy; however, there are some things you have to keep in mind to save yourself some time and frustration.

The steps to upgrade to APEX 4.0 (and 4.*) are described by Oracle as below:

1) Download the latest version of Oracle APEX

2) Unzip the zip file, preferably in a location with a short path. For example /home/oracle

3) Change your working directory to the unzipped apex directory. For example /home/oracle/apex

4) Login to the database:
$ sqlplus /nolog
SQL> CONNECT SYS as SYSDBA
Enter Password:
SYS_Password

5) Execute the first part of the installation:
SQL> @apexins SYSAUX SYSAUX TEMP /i/

6) The previous step will log you out of the database, log back into the database as described above.

7) Execute the below command where APEX_HOME is the location of where you have unzipped the installation software (NOTE1)
SQL> @apxldimg.sql APEX_HOME

8) Execute the below command:
SQL> @apxchpwd

9) Open your browser and check if the installation was a success by opening http://localhost:8080/apex/apex_admin

In essence these are all the steps you need to complete for your installation / upgrade of Oracle APEX to the latest version. If all goes without any errors you can be done in a couple of minutes and ready to start developing and testing with the latest version of Oracle APEX. However, there is one small catch to it: refer to NOTE1 below, which you need to keep in mind when executing step 7.



NOTE1:
The Oracle documentation states exactly the following:
SQL> @apxldimg.sql APEX_HOME
[Note: APEX_HOME is the directory you specified when unzipping the file. For example, with Windows 'C:\'.]

If you do exactly this you should be fine and everything should run as expected. However, you have to read the line carefully: you have to specify the location where you unzipped the file, for example /home/oracle. The issue is that a lot of people (me included) do not read this correctly and think that, because the script will need some other scripts, you have to state the location where the installation software itself is located, for example /home/oracle/apex. This is however incorrect.

The installation software will, at a certain point, start looking for the images it needs to load and will extend the given path with /apex/images. If you provide the wrong path (descending into the unzipped apex location) you might get the below error when executing one of the steps:

SQL> @apxldimg.sql /home/oracle
PL/SQL procedure successfully completed.
PL/SQL procedure successfully completed.
Directory created.
Directory created.
declare
*
ERROR at line 1:
ORA-22288: file or LOB operation FILEOPEN failed 
The system cannot find the path specified. 
ORA-06512: at "SYS.DBMS_LOB", line 523 
ORA-06512: at "SYS.XMLTYPE", line 287 
ORA-06512: at line 17 
Commit complete.
timing for: Load Images
Elapsed: 00:00:00.03
Directory dropped.
Directory dropped.

While, if you do it correctly you will get the below output:
SQL> @apxldimg.sql /home/oracle
PL/SQL procedure successfully completed.

PL/SQL procedure successfully completed.
. Loading images directory: /home/oracle/apex/images
Directory created.

PL/SQL procedure successfully completed.

PL/SQL procedure successfully completed.

PL/SQL procedure successfully completed.

Commit complete.

Directory dropped.
timing for: Load Images
Elapsed: 00:02:36.41
SQL>

Meaning: selecting the wrong path, and not following the instructions from Oracle to the letter, even though they are not described very clearly, might result in a situation where your upgrade or installation does not go as you might expect.

Saturday, April 19, 2014

Oracle Database Backup Service explained

Oracle databases are commonly used for mission-critical systems; in many cases databases are configured in a high-availability setup spanning two or more datacenters. Even though a dual or triple datacenter setup protects you against a number of risks, for example a fire in one of the datacenters, it does not excuse you from implementing a proper backup and recovery strategy. In cases where your data is corrupted, or you need to consult a backup for any other reason, you will most likely rely on Oracle RMAN. RMAN is the default tool for backup and recovery and ships with the Oracle database.

The below diagram shows a proper way of conducting backups. In this case all the data in database A and B is written to the tape library in another datacenter. Databases C and D write the data to the other datacenter. This ensures that your data is always at two locations. If for some reason datacenter-1 should be considered a total loss you can still recover your data from the other datacenter. For mission critical systems you most likely also will have a standby database in the other datacenter however this is not included in this diagram.

Even though this is considered a best practice, it is for some companies a costly implementation. Especially smaller companies do not want to invest in a dual, or even triple, datacenter architecture. For this reason you commonly see that the data is written to tape in the same datacenter where the database is hosted and that a person collects the tapes on a daily basis. Or, in some worst-case scenarios, the tapes just reside in the same datacenter. This means that in case of a fire the entire data collection of a company can be considered lost.

Oracle has recently introduced a solution for this issue by adding a cloud backup service to its cloud services portfolio. It provides the option to keep using your standard RMAN tooling; however, instead of talking to a local tape library, or one in another datacenter, you will be writing your backup to the Oracle cloud. This cloud service, named Oracle Database Backup Service, requires you to install the Oracle Database Cloud Backup Module on your database server. You can use the installed module as an RMAN channel to do your backup. By using encryption and compression you can ensure that your backup is sent quickly and securely to the Oracle Database Backup Service.


The above diagram shows the flow used in case you back up to the Oracle Database Backup Service. This model works when you have, for example, only a single datacenter. However, it can also work as a strategic model when you have multiple datacenters, and even when you have mixed this with cloud-based hosting.

The above diagram shows how you can use the Oracle Database Backup Service to do a cloud-to-cloud backup. If you, for example, host your database at Azure or Amazon, you may want to back up your data at the same backup service provider all your other datacenters are using, or you may want to have it at Oracle to ensure your data is not with one single company. In both cases you can use the same mechanism to perform the backup to the Oracle Database Backup Service.

Creating an account at Oracle and ordering backup space is easy and can be done completely online. As you can see from the screenshot below you can order per terabyte of backup space.


One thing you have to keep in mind, as with all cloud-based solutions, is that there are some legal considerations you need to review. When using the Oracle Database Backup Service you are moving your data away from your company and into the trust of another company. Oracle has provided numerous security options to ensure your data is safe; however, from a legal point of view you have to be sure you are allowed to move the data into the trust of Oracle. For most US-based companies this will not be an issue; for US government agencies and non-US companies it is something you might want to check with your legal department, just to be sure.

Friday, April 18, 2014

Enterprise cloud spending $235B

Companies are moving to the cloud. The trend is more and more to move business functions to cloud-based solutions. A couple of years ago companies were not including cloud in the main considerations when thinking about new or improved IT solutions; currently we see cloud-based solutions as a viable option on almost every shortlist. This is showing in the forecasts and the history of spending on cloud technology and the cloud-based architectures on which companies are deploying enterprise functionality.

Ryan Huang reports on the ZDNet page about the growth of cloud-based spending and the forecast for 2017. Below you can see the graph showing the predicted rise of cloud spending in the upcoming years.


This prediction shows that all companies who are currently investing in building cloud-based platforms are making a solid investment, as the trend is that cloud-based solutions, and the associated customer investment, will continue to grow, for all good reasons.

Monday, April 14, 2014

Oracle Big Data Appliance node layout

The Oracle Big Data Appliance ships in a full-rack configuration with 18 compute nodes that you can use. Each node is a Sun X4-2L or X3-2L server, depending on whether you purchased an X4-2 or X3-2 Big Data Appliance. Both models, however, provide you with 18 compute nodes. The below image shows the rack layout of both the X3-2 and X4-2 rack. Important to note is that the numbering of the servers is done bottom-up. A starter rack has only 6 compute nodes, and you can expand the rack with in-rack expansions of 6 nodes. Meaning, you can grow from 6 to 12 to a full rack as your capacity needs grow.


In every setup, regardless of whether you have a starter rack, a full rack or a rack extended with a single in-rack expansion of 6 nodes (making it a 12-node cluster), nodes 1, 2 and 3 have a special role. As we go bottom-up, starting with node 1, we have the following software running on the nodes:

Node 1:
First NameNode
Zookeeper
Failover controller
Balancer
Puppet master
Puppet agent
NoSQL database
DataNode

Node 2:
Second NameNode
Zookeeper
Failover controller
MySQL backup server
NoSQL Admin
Puppet agent
DataNode

Node 3:
Job Tracker
Zookeeper
CMServer
ODI Agent
MySQL primary server
Hue
Hive
Beeswax
Puppet Agent
NoSQL
DataNode

Nodes 4 till 6/12/18:
DataNode
TaskTracker
Cloudera Manager Agent
Puppet Agent
NoSQL

Understanding what runs where is of vital importance when you are working with an Oracle Big Data Appliance. It helps you understand which parts can be brought down without too much effect on the system and which parts you should be more careful about. As you can see from the above list, there are some parts that are made highly available and there are some parts that will result in loss of service when brought down for maintenance.