Wednesday, March 20, 2013

JDeveloper versions for SOA BPEL extensions

When developing Oracle SOA and BPEL components with Oracle JDeveloper you will want to add the extensions for this to your JDeveloper environment. When selecting a version of JDeveloper you have to be careful, as not all versions support this out of the box. When you read the information on the JDeveloper download page at the Oracle Technology Network you will see a warning about this subject for certain 11.1.2.x versions.


Important Note - This version of JDeveloper doesn't include the SOA, WebCenter, and Oracle Cloud development features - to use these components you'll need to download Oracle JDeveloper 11.1.1.6.0.


This indicates that you will have to download an 11.1.1 version instead of an 11.1.2 version to be able to use the SOA, WebCenter and Oracle Cloud development features. If you check the release notes of the JDeveloper Sherman release (11.1.2) you will find the following statement:


There is no design-time or runtime support for SOA, BPM, and WebCenter components in 11g Release 2.


Oracle has stated that release 2 focuses primarily on ADF, while release 1 focuses on SOA components. This essentially makes them two different products, each dedicated to its own task. When you want to develop SOA components you will have to select release 1; when developing ADF you have to select release 2. That is, if you want to make use of all the best solutions built into JDeveloper.

For people who are developing both ADF and SOA components this poses a possible issue. According to Oracle this means that you will have to install two versions of JDeveloper on the same workstation to be able to work correctly. Even though this is not a blocking issue, it is not desirable. According to unconfirmed sources the "issue" should be resolved in the 12c release of Oracle JDeveloper.

There are unofficial ways to get this working within one version; however, this is not recommended. If you have release 2 installed on your workstation and need to build both ADF and SOA components, the best way is to uninstall JDeveloper release 2 and install the latest release 1, which is 11.1.1.6.0.

The best way to remove an already installed version of JDeveloper is to make use of the uninstall utility. A large number of people simply delete the installation; however, JDeveloper comes with a quite decent uninstall option, which can be found at $JDEVHOME/utils/uninstall/

Tuesday, March 19, 2013

The mobile keyboard layout Minuum

For as long as there have been computers that use a keyboard, people have been developing new keyboard layouts. Keyboards differ between countries due to special characters in the language, or simply because one layout caught on in a specific country and another did not. With the growing adoption of mobile phones, and later tablets, people have been looking at how to fit keyboards on smaller surfaces than your average keyboard. Several inventions have been developed to interact with the user in different ways: we have seen T9 as a predictive text mechanism, we have seen Swype as a new way of interacting with the keyboard, and many others.

Now a promising new way of interacting with a keyboard is being developed by Minuum. The Minuum keyboard layout lets you type on any surface of any size. Especially the ability to type on a very small surface is important, as we see more and more small devices come to the market that we need to interact with.

The video below shows how the Minuum keyboard works. Especially the keyboard option attached to a watch is quite nice. If we think about this in combination with a project like Google Glass, this could become very interesting in the future.

Face recognition more than only for security

Face recognition is something that immediately triggers recollections of the movie Minority Report for most of us. For a long time it was something from science fiction movies, until a couple of years back when we started to find real-world face recognition in our day-to-day lives. People working in highly secured environments have encountered it when entering restricted areas, where a combination of an ID card, a fingerprint and face recognition determined whether you could enter a secured area or not.

Today you see more and more face recognition systems used for security. Companies like Artec and Biometry develop products to help companies and governments regulate access to restricted areas.



For the general public, face recognition became part of day-to-day life when social networks started to use it to help you tag friends in the pictures you upload. Facebook is one of the social networks that started to use this technology. For implementing it they received a lot of comments from the privacy community and government agencies. On the Facebook blog you can find some more information on why and how they implemented this option:

Because photos are such an important part of Facebook, we want to be sure you know exactly how tag suggestions work: When you or a friend upload new photos, we use face recognition software—similar to that found in many photo editing tools—to match your new photos to other photos you're tagged in. We group similar photos together and, whenever possible, suggest the name of the friend in the photos.

Face recognition is a controversial field, and Facebook has received a lot of comments on using this feature; governments and privacy organizations have been pushing to remove this option. In some cases they have succeeded, and Facebook has been forced to disable the feature and remove the face recognition data, as can be read in this article in the Daily Mail.

Classic face recognition, known as the eigenfaces approach, makes use of principal component analysis to analyze a face and match it to a set of data stored in a database. Principal component analysis (PCA) is a mathematical procedure that uses an orthogonal transformation to convert a set of observations of possibly correlated variables into a set of values of linearly uncorrelated variables called principal components. The number of principal components is less than or equal to the number of original variables. The transformation is defined in such a way that the first principal component has the largest possible variance (that is, it accounts for as much of the variability in the data as possible), and each succeeding component in turn has the highest variance possible under the constraint that it be orthogonal to (i.e., uncorrelated with) the preceding components. Principal components are guaranteed to be independent only if the data set is jointly normally distributed. PCA is sensitive to the relative scaling of the original variables. Depending on the field of application, it is also named the discrete Karhunen–Loève transform (KLT), the Hotelling transform or proper orthogonal decomposition (POD).
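To make this a bit more concrete, below is a minimal sketch of PCA as used in the eigenfaces approach, written in Python with NumPy. The image size and the random "faces" are made up for illustration; a real system would work on aligned grayscale face crops.

```python
import numpy as np

# Toy data: 20 "face images" of 32x32 pixels, flattened to vectors.
rng = np.random.default_rng(0)
faces = rng.random((20, 32 * 32))

# Center the data on the mean face.
mean_face = faces.mean(axis=0)
centered = faces - mean_face

# PCA via SVD: the rows of vt are the principal components
# ("eigenfaces"), ordered by decreasing variance.
u, s, vt = np.linalg.svd(centered, full_matrices=False)
eigenfaces = vt[:10]  # keep the 10 strongest components

# Project a new face into the low-dimensional eigenface space.
new_face = rng.random(32 * 32)
weights = eigenfaces @ (new_face - mean_face)

# Match by nearest neighbour in eigenface space.
gallery = centered @ eigenfaces.T  # weights of all known faces
distances = np.linalg.norm(gallery - weights, axis=1)
print("closest known face:", int(np.argmin(distances)))
```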

However, next to the fields of security and helping to tag friends in pictures, there are some other fields in which this is used. For example, security forces use it to find and track people in public places, and casinos use it to spot known cheats among the visitors when they enter a casino again. Software vendors are using it to replace standard username and password authentication for operating systems and phones. We are likely to see much more coming our way, as developers now have access to open-source implementations of face recognition software via the OpenBR project.
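For developers who want to experiment, the snippet below shows the detection step, finding faces in an image, which is the first stage of any recognition pipeline. It uses OpenCV rather than OpenBR, purely because its Python API is widely available; the image file names are made up.

```python
import cv2

# Load the frontal-face Haar cascade that ships with OpenCV.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

image = cv2.imread("visitors.jpg")  # hypothetical input image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Detect faces and draw a rectangle around each one.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("visitors-detected.jpg", image)
print("found", len(faces), "face(s)")
```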

A new way of using face recognition algorithms is an implementation by the people from Ditto Technologies Inc. Ditto has developed a system that scans your face and finds the best matching glasses for you, based upon the biometric information obtained from the scan of your face.

As we can see on the Ditto site, face recognition and the associated algorithms might be able to help us in fields we have not even thought of yet; picking glasses is just one of them. Looking at current tech startups we can see quite a lot of companies doing something with face recognition, and this field will only grow in the future.

Friday, March 01, 2013

Big data visualisation


A large part of the effort in developing big data solutions concentrates on retrieving data, storing data and getting meaning out of that data. For a lot of engineers and developers, getting meaning out of the vast amount of data is the end of the task. The system, for example a Hadoop implementation, has crunched vast amounts of data and provides the answer in the form of an array of data or a file of consolidated answers. Coming from a large pile of structured and unstructured data this is a huge accomplishment; however, it is not the end of things.

The next step, in a large number of cases, is getting this set of data, the result of your big data analysis, to the users. To be more precise: how do we get it there in a usable form? Receiving a flat text file is not always the best way of delivering your results to a user or customer. It might be the correct way to provide it to an analyst who will do further analysis on it; however, for a user or customer who quickly wants to see the information, it is most likely not.

One of the fields of big data, and of the way we work with computers and computer systems in relation to data, will be big data visualisation. Big data visualisation focuses on how we handle massive amounts of data and how we make sure they are represented in such a way that they are understandable by a human. The vast amount of data generally associated with big data solutions is no longer interpretable by a human; that is why we have machines that do this for us. However, now we have to focus on how to ensure that the outcome of the big data analysis is visualised in a human-interpretable format.

Below, a high-level diagram shows a setup for a distributed sensor network. In this diagram the sensors pick up signals and transmit them to the sensor server in an M2M (machine-to-machine) fashion; this is all part of the "sensor domain" in the diagram. The sensor server places the retrieved data into the Hadoop cluster by writing it to HDFS (the Hadoop Distributed File System) within the Hadoop domain.
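As a rough illustration of the sensor server's role, the sketch below batches incoming readings to a local file and pushes that file to HDFS with the standard Hadoop command-line client. The paths, file names and record format are assumptions made for the sake of the example.

```python
import subprocess
import time

def flush_to_hdfs(readings, hdfs_dir="/data/sensors/raw"):
    # Write a batch of (sensor_id, timestamp, value) readings
    # to a local tab-separated file.
    local_file = f"/tmp/sensor-batch-{int(time.time())}.tsv"
    with open(local_file, "w") as f:
        for sensor_id, timestamp, value in readings:
            f.write(f"{sensor_id}\t{timestamp}\t{value}\n")
    # Place the batch on HDFS using the standard Hadoop CLI.
    subprocess.run(["hdfs", "dfs", "-put", local_file, hdfs_dir], check=True)

flush_to_hdfs([("sensor-1", 1363737600, 21.5),
               ("sensor-2", 1363737601, 19.8)])
```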


Data within the Hadoop domain is often referred to as the data lake. The big data stored in HDFS, the data lake, is at this point unstructured data. By running MapReduce on the data the sensor server has placed in HDFS, we can extract more meaning from this unstructured data. This is commonly the point where most developers on Hadoop and MapReduce will stop: providing the consolidated and structured data and answers via MapReduce is the endpoint.
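To give an idea of what that consolidation step can look like, below is a minimal Hadoop Streaming job in Python that computes the average reading per sensor, assuming the tab-separated format sketched earlier. It is a sketch, not a production job.

```python
#!/usr/bin/env python
# mapper.py - emits sensor_id as the key so all readings of one
# sensor end up at the same reducer.
import sys

for line in sys.stdin:
    parts = line.rstrip("\n").split("\t")
    if len(parts) != 3:
        continue  # skip malformed records
    sensor_id, timestamp, value = parts
    print(f"{sensor_id}\t{value}")
```

```python
#!/usr/bin/env python
# reducer.py - consolidates the sorted mapper output into one
# average value per sensor.
import sys

current_id, total, count = None, 0.0, 0

for line in sys.stdin:
    sensor_id, value = line.rstrip("\n").split("\t")
    if sensor_id != current_id:
        if current_id is not None:
            print(f"{current_id}\t{total / count}")
        current_id, total, count = sensor_id, 0.0, 0
    total += float(value)
    count += 1

if current_id is not None:
    print(f"{current_id}\t{total / count}")
```

A job like this is typically launched with the Hadoop Streaming jar, for example: hadoop jar hadoop-streaming.jar -input /data/sensors/raw -output /data/sensors/avg -mapper mapper.py -reducer reducer.py -file mapper.py -file reducer.py. The exact location and name of the streaming jar differ per Hadoop distribution.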

The next step, however, is presenting this data to the users and customers, or to an analyst team who will do further analysis on it. For this you need to load the results of your MapReduce job into a database; in our case, as shown above, this is a database within the BI domain. The structured data resulting from the MapReduce job can then be used within other applications or BI tools to be presented in a human-readable way.
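As a small sketch of that loading step, the snippet below reads the reducer output and inserts it into a table. Here sqlite3 stands in for whatever database lives in your BI domain, and the file name and table layout are assumptions for the sake of the example.

```python
import sqlite3

conn = sqlite3.connect("bi_domain.db")  # stand-in for the BI database
conn.execute("""
    CREATE TABLE IF NOT EXISTS sensor_averages (
        sensor_id TEXT PRIMARY KEY,
        avg_value REAL
    )
""")

# MapReduce output typically lands in files named part-00000, part-00001, ...
with open("part-00000") as results:
    rows = (line.rstrip("\n").split("\t") for line in results)
    conn.executemany(
        "INSERT OR REPLACE INTO sensor_averages VALUES (?, ?)", rows
    )

conn.commit()
conn.close()
```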

When you have a predominantly Oracle landscape it can make sense to deploy Oracle BI as the tool to connect to the database holding the MapReduce results. One of the benefits is that OBIEE also allows you to use multiple data sources, meaning that you can combine the MapReduce results coming from your sensor network with, for example, your sales data in the Oracle eBS database. OBIEE provides some great ways of visualising data; it gives your users and customers a view of the results in multiple ways which are very human-readable.

However, when using Oracle BI tools you are limited to the options provided by the tool; this is the same for most tools coming from other vendors. What I expect we will see in the near future is the rise of new companies who specialise in data visualisation. One example of such a company is Periscopic, who specialise in this field. We also see a lot of online companies and services coming to life that provide support in visualising data.

A lot of good open-source libraries are also coming to life, which you can download (and contribute to) as open ways of visualising sets of data. A very cool implementation of open-source data visualisation is D3JS. Some examples of data visualisation done by making use of D3JS are shown below.