Tuesday, October 06, 2015

Oracle VM - selecting a DR layer

When developing a new solution, including the application, data-store, and infrastructure components, one of the questions to ask is at which layer to build resilience against failure. At which level of the stack will you protect against failure of a component, and at which level will your disaster recovery focus? In essence the answer is quite simple: you should ensure that disaster recovery is safeguarded as high as possible in the stack. The full answer is far more complex and includes disaster recovery, high availability, and maximum availability components. Building a solution which is resilient against failure is a very complex process in which every component needs to be taken into account. However, making sure that you have disaster recovery as high up in the stack as possible will make your life much easier.

As an example we take the image below, which shows an application-centered disaster recovery solution within a virtualized environment based on Oracle VM.

Within this solution, applications run in an active-active setup in both site A and site B. Information between the two sites is kept in sync by making use of Oracle's Maximum Availability Architecture (MAA) principles. This means that when a site fails, the application will still be able to function, as it also runs on the other site. Users should not face any downtime and should not even be aware that one of the two sites has been lost to a disaster.

The application-centered disaster recovery solution is the most resilient solution against disasters and the loss of a site. However, in some cases it is not feasible to run an architecture as shown above, while you would still like to be able to perform disaster recovery of the virtual machines running within your deployment. A solution to this is making use of block replication at the storage level and allowing your recovery site (site B) to start the VMs in case site A is lost.

Within this model you replicate all storage associated with the VMs from site A to a storage repository within site B. In essence this gives you an exact copy of each VM; however, on site B the machine is in a stopped state. This is also represented in the diagram below, where you can see the replication of storage between the two sites more clearly. For this solution you can use storage block replication in whatever way your storage appliance supports.

In case of a failure you have to ensure that all machines on site A are stopped; after this you can make the storage on site B readable and writable and start the virtual machines. This might not be the most ideal solution compared with disaster recovery at the higher levels of the stack; however, in case you are forced to ensure disaster recovery at an infrastructure/VM layer instead of an application level, this is a solution that can be used.

For more information, also view the slide deck below.

Friday, October 02, 2015

Oracle Linux - detect security issues

When operating a large landscape of Linux machines, in our case a large landscape of Oracle Linux machines, security is one of the vital things to keep in mind. In an ideal world all your Linux deployments would be of exactly the same version and contain exactly the same level of patching. In this same ideal world no machine would differ from another, you would be able to run a yum update command on all machines, and you would never face any issue, nor would you be required to talk to end-customers or other tech teams. However, even though in some situations you are able to maintain such a state, it is common to see a landscape of servers that is unequally patched, and in some cases servers that have not been patched for a long period of time. This is not necessarily due to bad maintenance by the Linux administrators; commonly it is related to pressure from the business not to change the systems, or to not getting approval from a change advisory board.

When it comes down to new or improved functionality that comes with a Linux patch, this might be acceptable. However, missing a security patch can be much more serious. Oracle Enterprise Manager, in combination with yum, provides a solution to show which patches need to be applied to which system. However, a different solution can also be used specifically to identify which security issues have not been addressed on a given system.

To get an overview of the security vulnerabilities on your system you can use OpenSCAP. OpenSCAP is based upon SCAP, a line of standards managed by NIST. SCAP was created to provide a standardized approach to maintaining the security of enterprise systems, such as automatically verifying the presence of patches, checking system security configuration settings, and examining systems for signs of compromise.

Oracle provides an OVAL (Open Vulnerability and Assessment Language) XML file which you can use in combination with OpenSCAP to run against your Oracle Linux deployment to get a quick overview of what needs attention on your system and what looks correct. Refer to the Oracle Linux security guide for more information on this subject.

After you have installed the needed components using yum, you will have to download the Oracle Linux specific components, or in more detail, the Oracle Linux ELSA file in OVAL format. Oracle provides this file per year, where each year's file contains the information on security issues found during that year. As an example, if you want to run an audit against the ELSA file of 2015, perform the following steps:

1) Download the ELSA information in the OVAL format and extract it from the bz2 file
wget http://linux.oracle.com/security/oval/com.oracle.elsa-2015.xml.bz2
bzip2 -d com.oracle.elsa-2015.xml.bz2

2) Run the audit. In this case we send both the XML result and the HTML report to /tmp; however, you are free to select any location you want.
oscap oval eval --results /tmp/elsa-results-oval-2015.xml --report /tmp/elsa-report-2015.html ./com.oracle.elsa-2015.xml

This will produce rather large output on the screen which provides some quick information; however, the more valuable information can be found in both the XML result and the HTML report which we have sent to /tmp. For reference, below is the shell output of the audit on the 2015 file, which I ran against an Oracle Linux implementation running kernel 3.8.13-98.2.2.el7uek.x86_64:
[root@localhost oscap]# oscap oval eval --results /tmp/elsa-results-oval-2015.xml --report /tmp/elsa-report-2015.html ./com.oracle.elsa-2015.xml
Definition oval:com.oracle.elsa:def:20153073: false
Definition oval:com.oracle.elsa:def:20153072: false
Definition oval:com.oracle.elsa:def:20153071: true

//------------ SNIP SNIP ------------//

Definition oval:com.oracle.elsa:def:20150166: false
Definition oval:com.oracle.elsa:def:20150165: false
Definition oval:com.oracle.elsa:def:201501641: false
Definition oval:com.oracle.elsa:def:20150164: false
Definition oval:com.oracle.elsa:def:20150118: false
Definition oval:com.oracle.elsa:def:20150102: true
Definition oval:com.oracle.elsa:def:20150100: false
Definition oval:com.oracle.elsa:def:20150092: false
Definition oval:com.oracle.elsa:def:20150090: false
Definition oval:com.oracle.elsa:def:20150087: false
Definition oval:com.oracle.elsa:def:20150085: false
Definition oval:com.oracle.elsa:def:20150074: false
Definition oval:com.oracle.elsa:def:20150069: false
Definition oval:com.oracle.elsa:def:20150068: false
Definition oval:com.oracle.elsa:def:20150067: false
Definition oval:com.oracle.elsa:def:20150066: false
Definition oval:com.oracle.elsa:def:20150047: false
Definition oval:com.oracle.elsa:def:20150046: false
Definition oval:com.oracle.elsa:def:20150016: false
Definition oval:com.oracle.elsa:def:20150008: false
Evaluation done.
[root@localhost oscap]# 

3) Review the results (and take action)
You will have to review the results, which can be done by looking at the HTML report, or you can run a parser against the XML output for a more automated way of checking the results. In case you run a large number of Oracle Linux machines and you like to use the oscap way of checking parts of your security, you most likely want to have the XML files in a central location so you do not need to connect all your machines to the public internet, and you most likely want to run this in a scheduled form and interpret the results in an automated manner. The HTML file is usable for human reading; however, the XML file is something you will want to parse in case you have more than a handful of servers.
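As a minimal sketch of such automated interpretation, even the plain-text output shown above can already be filtered. The snippet below uses a small sample file that mimics the oscap output; on a real system you would capture the stdout of the oscap oval eval run instead:

```shell
# Create a small sample that mimics the oscap output shown above;
# on a real system, capture the stdout of "oscap oval eval" instead.
cat > /tmp/oscap-demo.txt <<'EOF'
Definition oval:com.oracle.elsa:def:20153073: false
Definition oval:com.oracle.elsa:def:20153071: true
Definition oval:com.oracle.elsa:def:20150102: true
Definition oval:com.oracle.elsa:def:20150100: false
EOF

# Keep only the definitions that evaluated to "true" (system affected)
# and strip the trailing colon from the definition id.
grep ': true$' /tmp/oscap-demo.txt | awk '{gsub(/:$/,"",$2); print $2}'
```

For this sample the command prints the two definitions flagged as true, oval:com.oracle.elsa:def:20153071 and oval:com.oracle.elsa:def:20150102, which is the list you would feed into a reporting or ticketing flow.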

Saturday, September 26, 2015

Oracle Linux - SSH slow login

When you run a lot of test installations of Oracle Linux, like I do on my laptop and my home Oracle VM installation, you do not have them all configured in DNS. When hopping from Linux machine to Linux machine using SSH, you often face a long delay between the moment you enter your username and the moment you are asked for the password. The reason for this is that the SSH daemon will, by default, try to do a reverse DNS lookup on the address of the machine you are connecting from. When running Oracle Linux in an operational environment where you most likely need an audit trail, this is absolutely a good way of working. However, in case you run multiple lab and play machines in your local environment this is not needed, and the wait between entering your username and being asked for the password quickly becomes an annoyance.

To change the behaviour of the SSH daemon you need to change the configuration file /etc/ssh/sshd_config and ensure that "UseDNS no" is included in it. In a standard deployment of Oracle Linux, "UseDNS yes" is present but commented out; since the default behavior is already yes, you have to explicitly include "UseDNS no". To ensure that the setting is applied, restart the service after you have made the change to the file.

[root@localhost ~]# service sshd restart
Redirecting to /bin/systemctl restart  sshd.service
[root@localhost ~]#
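The edit itself can be scripted. The sketch below works on a scratch copy of the file (with illustrative sample content) so it is safe to run anywhere; on a real system you would point it at /etc/ssh/sshd_config and restart sshd afterwards:

```shell
# Work on a scratch copy; substitute /etc/ssh/sshd_config on a real system.
cfg=/tmp/sshd_config.demo
printf '%s\n' 'Port 22' '#UseDNS yes' > "$cfg"

# Replace an existing (possibly commented-out) UseDNS line,
# or append one if the file does not contain it at all.
if grep -qE '^#?UseDNS' "$cfg"; then
  sed -i -E 's/^#?UseDNS.*/UseDNS no/' "$cfg"
else
  echo 'UseDNS no' >> "$cfg"
fi

grep UseDNS "$cfg"   # → UseDNS no
```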

Friday, September 25, 2015

Oracle VM - Anti-Affinity Groups

Using Oracle VM to virtualize machines is a great way to reduce the number of physical machines you need to run a large estate of virtual machines. In general, cloud administrators should not be worried about the exact physical machine on which a virtual machine is started. Oracle VM will select, based upon an algorithm, the physical machine on which the virtual machine will start. However, in some cases you very much do need to worry about where a virtual machine will start; especially when you do not want a virtual machine started on the same machine as certain other virtual machines.

An example of this is when you run a high availability cluster of databases on virtual machines. What you want to prevent is that all nodes of the cluster run on the same physical hardware. The obvious reason for this is that you do not want your entire cluster to fail in case one physical box fails. For a long time you were required to ensure this manually or use custom scripting. However, within Oracle VM you now have the Anti-Affinity groups option.

Virtual machines placed in the same Anti-Affinity group will not be started on the same physical hardware. Meaning, if you have a cluster of n virtual machines, each hosting a node of the same database cluster, you can place them into one Anti-Affinity group, and the algorithm responsible for selecting a physical machine to start each virtual machine will take this into account.

The above image shows an Anti-Affinity group named MyAAGroup which holds 3 virtual machines, all members of a database cluster. By placing them into the same Anti-Affinity group you have the assurance that they will never be started on the same physical hardware, and by doing so you honor the Maximum Availability Architecture principles in this specific area.

Oracle Linux - change IO scheduler

When installing Oracle Linux you will be equipped with an I/O scheduler which is perfectly usable for a database. This is not surprising, as Oracle is originally a database vendor. Input/output (I/O) scheduling is the method that computer operating systems use to decide in which order block I/O operations will be submitted to storage volumes; I/O scheduling is sometimes called disk scheduling. However, even though you get a good scheduler by default, there might be a need to change it for some reason. A number of reasons can be thought of; depending on the type of I/O, your performance can improve by selecting a different scheduler.

The below image shows the overall view of the Linux storage stack, which includes the scheduler within the block layer. This image shows the full stack, which includes more components than only the I/O scheduler.

In case you want to check what the current I/O scheduler is for a specific device you can do this by using the following command (for example for sda):

cat /sys/block/sda/queue/scheduler

This will show you the current scheduler that is used. For example, the output could be the one shown in the example below:

[root@demo1 etc]# cat /sys/block/sda/queue/scheduler
noop [deadline] cfq
[root@demo1 etc]# 
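The same check can be done for all block devices at once with a small loop over sysfs. A minimal sketch (device names will differ per system):

```shell
# Print the scheduler line for every block device on the system;
# the entry between square brackets is the scheduler currently in use.
for sched_file in /sys/block/*/queue/scheduler; do
  [ -e "$sched_file" ] || continue    # skip if there are no block devices
  dev=${sched_file#/sys/block/}
  dev=${dev%/queue/scheduler}
  printf '%s: %s\n' "$dev" "$(cat "$sched_file")"
done
```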

In case you need to change this, there is a difference between changing it at runtime and making the change persistent. To activate a different scheduler at runtime, for example to change it to cfq, you can echo the new scheduler name into the sysfs file by using the below commands:

[root@demo1 etc]#
[root@demo1 etc]# cat /sys/block/sda/queue/scheduler
noop [deadline] cfq
[root@demo1 etc]# echo cfq > /sys/block/sda/queue/scheduler
[root@demo1 etc]# cat /sys/block/sda/queue/scheduler
noop deadline [cfq]
[root@demo1 etc]#

As you can see, this has changed the scheduler to cfq, and deadline is no longer selected as the standard I/O scheduler. To make the change persistent you have to change the GRUB configuration. When you are using GRUB 2 the process is a bit different from legacy GRUB, which is still found in many existing Linux implementations. When using GRUB 2 you have to edit the defaults file, which is located at /etc/default/grub; here you will have to add the new scheduler to GRUB_CMDLINE_LINUX, which could look like the example below:

GRUB_CMDLINE_LINUX="crashkernel=auto  vconsole.font=latarcyrheb-sun16 rd.lvm.lv=ol/swap rd.lvm.lv=ol/root vconsole.keymap=us rhgb quiet"

If we, for example, would like to make cfq persistent, we have to change the line into the example below by adding elevator=cfq to it:

GRUB_CMDLINE_LINUX="crashkernel=auto  vconsole.font=latarcyrheb-sun16 rd.lvm.lv=ol/swap rd.lvm.lv=ol/root vconsole.keymap=us rhgb quiet elevator=cfq"

This has only placed the new information into the defaults file and not yet into the grub.cfg file, where it is needed during boot. To ensure it is added to the grub.cfg file, you have to run grub2-mkconfig and direct the output to /boot/grub2/grub.cfg, as shown in the example below.

[root@demo1 default]# grub2-mkconfig -o /boot/grub2/grub.cfg
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-3.10.0-123.el7.x86_64
Found initrd image: /boot/initramfs-3.10.0-123.el7.x86_64.img
Found linux image: /boot/vmlinuz-3.8.13-35.3.1.el7uek.x86_64
Found initrd image: /boot/initramfs-3.8.13-35.3.1.el7uek.x86_64.img
Warning: Please don't use old title `Oracle Linux Server, with Unbreakable Enterprise Kernel 3.8.13-35.3.1.el7uek.x86_64' for GRUB_DEFAULT, use `Advanced options for Oracle Linux Server>Oracle Linux Server, with Unbreakable Enterprise Kernel 3.8.13-35.3.1.el7uek.x86_64' (for versions before 2.00) or `gnulinux-advanced-8f652ccf-3540-4549-9a5c-1d126e882d35>gnulinux-3.8.13-35.3.1.el7uek.x86_64-advanced-8f652ccf-3540-4549-9a5c-1d126e882d35' (for 2.00 or later)
Found linux image: /boot/vmlinuz-0-rescue-782e1cbce43c4c9d8829bd4addd5f09d
Found initrd image: /boot/initramfs-0-rescue-782e1cbce43c4c9d8829bd4addd5f09d.img
[root@demo1 default]#

If we now reboot the machine and check again which scheduler is applied to sda, we can see that cfq has been selected as the default scheduler for this device:

[root@demo1 ~]# cat /sys/block/sda/queue/scheduler
noop deadline [cfq]
[root@demo1 ~]#

When working with a legacy GRUB bootloader you can directly change the scheduler in /etc/grub.conf; however, with the introduction of GRUB 2 this is no longer an option and you need to take the above-mentioned steps to change the I/O scheduler in Oracle Linux.
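As an alternative to the elevator= kernel parameter, the scheduler can also be pinned per device with a udev rule. The sketch below is illustrative (the file name, the sd[a-z] device match, and the choice of cfq are assumptions, not a prescribed setup) and would be placed in, for example, /etc/udev/rules.d/60-io-scheduler.rules:

```
# Illustrative udev rule: select cfq for all sd* disks when they appear.
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/scheduler}="cfq"
```

This approach lets you set different schedulers for different devices, which the single elevator= parameter on the kernel command line cannot do.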

Monday, September 21, 2015

Oracle Enterprise Manager query table space sizes

Oracle Enterprise Manager provides you the ideal solution to manage a large number of targets. All information about the targets, for example Oracle databases, is stored in the Oracle Enterprise Manager Repository database. What makes it interesting is that you can query the database with SQL and get information out of it quickly, showing you exactly what you need.

In the below example we query the total tablespace size per database. The query provides a list of all databases that are registered as a target in OEM, in combination with the name of the server each is running on and the total size of the tablespaces.


The code is also available on GitHub, where you can find a larger collection of scripts. This scripting repository is updated continuously so everyone is able to make use of the scripts.

Dual node SSH tunnel with putty

When connecting to a remote Linux server over SSH you have the option to create a tunnel back to your local workstation. This can be very handy in case you, for example, need to map a port on the remote server to a localhost port on your workstation. For example, if the only allowed connection to the server is SSH, and the database on the server listens on port 1521, you will not be able to connect to port 1521 remotely. You can, however, tunnel over port 22 (SSH) so you can connect to localhost:1521 and communicate (via the SSH tunnel) with the database.

The above use is quite straightforward: when using a Linux workstation, creating a tunnel is simple, and when using Windows with PuTTY this is also done quite easily by creating a tunnel profile in PuTTY. It gets more interesting when you have the configuration shown below.

In this situation you have a Windows laptop which is only able to connect to the “jump server” via SSH. However, when you would like to make use of Oracle SQL Developer and connect to the database on the database server, you will not be able to connect directly on port 1521 or create a direct tunnel between your workstation and port 1521.

You will need to create a tunnel from your workstation to the “jump server” and from the “jump server” to the database server. This is in essence a double-hop tunnel. To arrange this, take the following steps:

  • Configure on your windows workstation a putty tunnel where the source will be 45678 and the destination is localhost:45678  (see screenshot below)
  • Connect with this configuration from your workstation to the “jump server”.
  • Execute the following command while on the “jump server” shell: ssh -L 45678:localhost:1521 root@database-server
  • While on your workstation, connect Oracle SQL Developer to localhost:45678

This should enable you to use Oracle SQL Developer locally by making use of a dual hop SSH tunnel to the database server via the “jump server”.
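When both hops run OpenSSH (so no PuTTY is involved), the same double hop can be captured in a client-side configuration fragment. The host aliases and user names below are illustrative, while the port numbers follow the example above; with this in ~/.ssh/config, a single `ssh -N db-tunnel` sets up the whole chain:

```
# Illustrative ~/.ssh/config fragment for the double-hop tunnel.
Host jump
    HostName jump-server
    User root

Host db-tunnel
    HostName database-server
    User root
    # Route this connection through the jump server (OpenSSH 5.4+).
    ProxyCommand ssh -W %h:%p jump
    # Make the remote listener reachable as localhost:45678 locally.
    LocalForward 45678 localhost:1521
```

Oracle SQL Developer then connects to localhost:45678 exactly as in the PuTTY scenario.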

Thursday, September 10, 2015

Exadata check IB cables

One of the things that helps make the Exadata perform at the speed it does is the fact that the connections between the compute nodes and the storage nodes are based upon InfiniBand. In some cases other, external components are also connected to the Exadata by making use of InfiniBand, which is an integrated and vital part of Exadata. The below presentation gives a quick introduction into the InfiniBand cabling of a full-rack Exadata and how you can connect other Oracle engineered systems to an Exadata InfiniBand fabric.

In normal situations all cables should be present in an Exadata. However, in some cases a cable might have been unplugged for some reason. As datacenters are often not near the location where engineers are, it can be handy to be able to check the state of the cables from the command line without the need to be physically present in the datacenter. The below bash script enables you to check the state of the cables on both the compute and the storage servers.

for ib_cable in `ls /sys/class/net | grep ^ib`; do
  printf "$ib_cable: "; cat /sys/class/net/$ib_cable/carrier;
done

The output will tell you, per InfiniBand interface, whether the cable is present. A 1 indicates that a cable is found; a 0 indicates that no cable is found.

Friday, September 04, 2015

Oracle EM12C query virtual machines

Oracle Enterprise Manager, partially in combination with Oracle VM Manager, is able to monitor and manage your Oracle VM landscape and the virtual machines that are deployed on it. One of the advantages of Oracle Enterprise Manager is that all the information associated with known targets is stored in a database. This means that with some simple SQL statements you are able to query information. In the below sample code we do a simple query on the Oracle Enterprise Manager repository database to get information about the virtual machines known to Oracle Enterprise Manager, in combination with their location in the cluster.

This query can be very handy in case you need to make a quick impact analysis and need to know in which datacenter, in which pool, and on which physical server specific virtual machines are deployed.

SELECT
   v_ovm_vm.ovm_display_name         AS VM_NAME,
   v_ovm_vm.kernel_ver               AS VM_KERNEL,
   v_ovm_serverpool.ovm_display_name AS VMSERVER_POOL,
   v_ovm_zone.ovm_display_name       AS VMSERVER_ZONE,
   v_ovm_server.ovm_display_name     AS VMSERVER
FROM
   MGMT$VT_VM_SW_CFG v_ovm_vm,
   MGMT$VT_VSP_CONFIG v_ovm_serverpool,
   MGMT$VT_ZONE_CONFIG v_ovm_zone,
   MGMT$VT_VS_SW_CFG v_ovm_server
WHERE
   v_ovm_vm.vsp_uuid = v_ovm_serverpool.vsp_uuid
   AND v_ovm_serverpool.zone_uuid = v_ovm_zone.zone_uuid
   AND v_ovm_vm.VS_UUID = v_ovm_server.vs_uuid
ORDER BY 3,4,5,1

The code is also available on GitHub, where you can find a larger collection of scripts. This scripting repository is updated continuously so everyone is able to make use of the scripts.

Monday, August 31, 2015

Oracle Linux local firewalls -- firewalld

A question that comes to me quite often is whether local firewalls should be used on Linux. Often the question comes from operating system administrators who do not “like” to maintain all the firewalls locally and would like the network team to take care of this at the network level. The question is also often posed by DBAs and developers who need to access the systems often and are involved in changes to the systems. Every time they need a port opened or a new route between machines added, they have to go through a change management process for the local firewalls, and they would rather see them not implemented.

As I work on Linux (and other operating systems) regularly, in changing roles, I do sympathize with these statements and understand the questions and the reasons behind them. From my role as a security consultant and architect, however, I do not agree with the statement that security should be managed solely by the network team and that local firewalls are nothing more than an annoyance.

A recent post on a mailing list around a different subject gave me the opportunity to again come back to my topic of defending the use of local firewalls.

I am particularly interested in confirming that low-risk servers can’t be used as a stepping stone to attack a high-risk server, or as a means of unauthorised data egress.

The above quote is out of context due to sharing restrictions; however, the full mail started a discussion on the topic of local firewalls. The quote alone already provides some clues on why local firewalls are important.

Take the below architecture deployment for a “standard” implementation of an Oracle-based landscape. The landscape uses Oracle Linux and hosts Oracle software.

Most implementations are based upon the above principle: they have a DMZ which hosts the external-facing services, and those machines connect to the back-end of the application, which in our case is an Oracle RAC database implementation in combination with an Oracle NoSQL key-value store.

In principle nothing is wrong with this picture: if all is done correctly, the external-facing firewall will only allow traffic on the ports that are needed and will block all other traffic. The same is applicable for the firewall between the DMZ and the back-end systems. However, in case an attacker is able to gain access to one of the hosts in the DMZ, the external firewall does not protect you against communication between the compromised host and the other hosts in that specific VLAN. The attacker will be able to communicate far more easily between the hosts due to the lack of a firewall between them.

The below implementation shows a more rigid model in which all hosts have a local firewall; in case one of the hosts is compromised, the attacker is limited largely to that host and the options to connect to other hosts are extremely limited.

Because of this, the administrators and the security team have much more time to fight back against the attack, and the overall security of the landscape is significantly raised.

Many people opt against implementing local firewall rules because it introduces management overhead. In my personal opinion, the use of local firewalls should be promoted as the standard, and not implementing them should be the exception, rather than the other way around. Currently the default is to not implement them, and only in some exceptional cases do customers decide to do so.

Now, if you implement local firewalls on Linux, the most common solution to look for is iptables. However, as from Oracle Linux 7 the default firewall is no longer iptables; it is firewalld instead. Firewalld uses the well-known netfilter framework, the packet filtering framework inside the Linux 2.4.x and later kernel series. The netfilter kernel component consists of a set of in-memory tables of rules that the kernel uses to control network packet filtering.

Oracle states the following about firewalld-based firewalls within the OL7 documentation:

The firewalld-based firewall has the following advantages over an iptables-based firewall:

  1. Unlike the iptables and ip6tables commands, using firewall-cmd does not restart the firewall and disrupt established TCP connections.
  2. firewalld supports dynamic zones, which allow you to implement different sets of firewall rules for systems such as laptops that can connect to networks with different levels of trust. You are unlikely to use this feature with server systems.
  3. firewalld supports D-Bus for better integration with services that depend on firewall configuration.

Especially the point about dynamic zones is very interesting if you need to maintain a large set of firewalls. You can create zones and activate them on the systems where they are needed, while still shipping all your installations with the same set. For example, a default installation could contain sets of rules (zones) for a number of situations in your enterprise footprint, and you activate them where appropriate.

To configure the settings of firewalld you can make use of a GUI by calling firewall-config, or you can use the CLI by calling firewall-cmd.
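To sketch the zone idea for the landscape discussed above: a custom zone is just an XML file under /etc/firewalld/zones/. The zone name, description, and the ports below are assumptions for the Oracle back-end example, not a prescribed configuration:

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- Illustrative zone file, e.g. /etc/firewalld/zones/db-backend.xml -->
<zone>
  <short>db-backend</short>
  <description>Back-end database hosts: allow SSH and the listener only.</description>
  <service name="ssh"/>
  <port protocol="tcp" port="1521"/>
</zone>
```

After a firewall-cmd --reload, the zone can be bound to an interface with firewall-cmd --zone=db-backend --change-interface=eth0; the same file can be shipped to every installation and only activated where it applies.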

Monday, August 17, 2015

Joining a startup

Ever thought about working for a startup? Ever decided not to do it because you thought it was financially a gamble and a risk considered too big? You might have done the right thing; if you are not able to take a financial risk and you have a family to support, then joining a startup might not be the best choice for you. Especially considering that 90% of startups fail, and a lot of them fail before they make a profit or are even able to pay a reasonable amount to the people who work for them.

However, for those who do have the option to take a financial risk and/or have an entrepreneurial way of thinking, for those who love the drive of people working at a startup, the upside can be big. Some startups become well-running businesses, and some, a very small share of the surviving 10%, will become very big.

If you join a startup from the beginning and it becomes big, very big, the value per employee can skyrocket. And a large part of those companies do recognise the endless efforts the employees have put in to make this happen.

As you can see in the above bubble chart, some companies have a very high value per employee. So, yes, working for a startup can be a risk. However, if it becomes a success it can become a very big success with a big financial gain. You should however never join a startup with this as a goal, join a startup for the fun and for the mindset that lives in such a company.

Tuesday, June 30, 2015

Oracle virtualbox failed to open /dev/vboxnetctl - resolved

VirtualBox is the virtualization solution from Oracle to virtualize operating systems on your workstation, used by many to run, for example, test and development servers locally. I use it on most of my workstations for this purpose; recently I hit a new issue on my MacBook. When trying to set up an internal network I was presented with the following error:

Error while adding new interface: failed to open /dev/vboxnetctl: No such file or directory

For some reason VirtualBox had gone south. Most solutions presented online involved restarting Oracle VirtualBox to resolve the issue. Restarting Oracle VirtualBox under Mac OS can be done by executing sudo ./VirtualBox restart in the /Library/StartupItems/VirtualBox directory. For some reason this caused another issue.

The restart command presented me with the below message instead of a clean restart:

Unloading VBoxDrv.kext
(kernel) Can't remove kext org.virtualbox.kext.VBoxDrv; services failed to terminate - 0xe00002c7.
Failed to unload org.virtualbox.kext.VBoxDrv - (iokit/common) unsupported function.
-v Error: Failed to unload VBoxDrv.kext
-f VirtualBox

After checking how I had stopped VirtualBox, I found out that some background processes were still running. Killing them and rerunning the restart command resolved all the issues, and VirtualBox was able to function again as can be expected. The restart command after killing the background processes presented me with the below:

Unloading VBoxDrv.kext
/Applications/VirtualBox.app/Contents/MacOS/VBoxAutostart => /Applications/VirtualBox.app/Contents/MacOS/VBoxAutostart-amd64
/Applications/VirtualBox.app/Contents/MacOS/VBoxBalloonCtrl => /Applications/VirtualBox.app/Contents/MacOS/VBoxBalloonCtrl-amd64
/Applications/VirtualBox.app/Contents/MacOS/VBoxDD2GC.gc => /Applications/VirtualBox.app/Contents/MacOS/VBoxDD2GC.gc-amd64
/Applications/VirtualBox.app/Contents/MacOS/VBoxDDGC.gc => /Applications/VirtualBox.app/Contents/MacOS/VBoxDDGC.gc-amd64
/Applications/VirtualBox.app/Contents/MacOS/VBoxExtPackHelperApp => /Applications/VirtualBox.app/Contents/MacOS/VBoxExtPackHelperApp-amd64
/Applications/VirtualBox.app/Contents/MacOS/VBoxHeadless => /Applications/VirtualBox.app/Contents/MacOS/VBoxHeadless-amd64
/Applications/VirtualBox.app/Contents/MacOS/VBoxManage => /Applications/VirtualBox.app/Contents/MacOS/VBoxManage-amd64
/Applications/VirtualBox.app/Contents/MacOS/VBoxNetAdpCtl => /Applications/VirtualBox.app/Contents/MacOS/VBoxNetAdpCtl-amd64
/Applications/VirtualBox.app/Contents/MacOS/VBoxNetDHCP => /Applications/VirtualBox.app/Contents/MacOS/VBoxNetDHCP-amd64
/Applications/VirtualBox.app/Contents/MacOS/VBoxSVC => /Applications/VirtualBox.app/Contents/MacOS/VBoxSVC-amd64
/Applications/VirtualBox.app/Contents/MacOS/VBoxXPCOMIPCD => /Applications/VirtualBox.app/Contents/MacOS/VBoxXPCOMIPCD-amd64
/Applications/VirtualBox.app/Contents/MacOS/VMMGC.gc => /Applications/VirtualBox.app/Contents/MacOS/VMMGC.gc-amd64
/Applications/VirtualBox.app/Contents/MacOS/VirtualBox => /Applications/VirtualBox.app/Contents/MacOS/VirtualBox-amd64
/Applications/VirtualBox.app/Contents/MacOS/VirtualBoxVM => /Applications/VirtualBox.app/Contents/MacOS/VirtualBoxVM-amd64
/Applications/VirtualBox.app/Contents/MacOS/vboxwebsrv => /Applications/VirtualBox.app/Contents/MacOS/vboxwebsrv-amd64
Loading VBoxDrv.kext
Loading VBoxUSB.kext
Loading VBoxNetFlt.kext
Loading VBoxNetAdp.kext
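The lingering background processes mentioned above can be located with ps before killing them. A minimal sketch; the exact process names vary per setup, but VBoxSVC and VBoxXPCOMIPCD are the usual suspects, and the PIDs are of course machine specific:

```shell
# List any VirtualBox background processes still running.
# The [V] trick stops grep from matching its own process line.
ps aux | grep -i '[V]Box' || true

# Kill the reported PIDs (substitute the PIDs from the output above),
# then rerun the restart script:
#   sudo kill <pid>
#   cd /Library/StartupItems/VirtualBox && sudo ./VirtualBox restart
```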

Thursday, May 28, 2015

Exalogic based Big Data Strategy

I recently published a whitepaper on an Exalogic based Big Data Strategy, which primarily covers how you can capture the data from, as an example, sensors. Most big data strategies focus on how to handle the data once it lands inside your Hadoop cluster. However, there is also a need for a clear strategy on how you capture the data before you can use it.

Not having a clear strategy for capturing the data to be used within a wider big data strategy can kill a big data project. This paper covers how you can use Oracle Exalogic in this process to ensure that you have a flexible and well performing solution for data acquisition.

You can find the article at the Capgemini website or you can read it below.

Oracle building blocks for future enterprise services

As we observe the direction enterprises are heading with regard to their IT footprint, we can observe a number of interesting trends. None of them are new; however, we see them picking up more and more momentum and becoming the new standard within enterprise IT. If we look at the directions enterprises are moving in, and at the demands of the internal users in the form of business departments, we see the challenges they face.

The questions asked by the business in some cases go against the traditional way of working and doing things. To implement them and satisfy the business, radical change is needed in some cases: not only in the way IT departments work, but also in the way the entire IT landscape is architected and how it has traditionally been built.

To move away from the traditional way of working, in most cases a combination of application and infrastructure modernization and rationalization is needed.

To read the full blogpost please visit the Capgemini.com Oracle blog.

Tuesday, April 14, 2015

Calculate DOM0 memory size

When using Oracle VM Server as a virtualization platform you have to ensure that Dom0 has enough memory allocated to it. Dom0 is the initial domain started by the Xen hypervisor on boot. Dom0 is an abbreviation of "Domain 0" (sometimes written as "domain zero" or the "host domain"). It is a privileged domain that starts first and manages the DomU unprivileged domains.

To ensure the correct amount of memory is allocated to Dom0, Oracle recommends the below algorithm, with memory sizes expressed in MB:

dom0_mem = 502 + int(physical_mem * 0.0205)

As an example, this means that a server with 2 GB of physical memory needs 543 MB of memory allocated to Dom0, and a server with 32 GB of physical memory needs 1173 MB.
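The formula can be sketched as a small helper to check the numbers yourself. A sketch only; the function name is mine, and both the input and the result are in MB, matching Oracle's formula:

```python
def dom0_mem_mb(physical_mem_mb):
    """Recommended Dom0 memory (MB) for a given amount of physical RAM (MB)."""
    return 502 + int(physical_mem_mb * 0.0205)

# The two examples from the text: 2 GB and 32 GB servers.
print(dom0_mem_mb(2 * 1024))   # 543
print(dom0_mem_mb(32 * 1024))  # 1173
```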

To change the Dom0 memory allocation, edit the /boot/grub/grub.conf file on the Oracle VM Server and change the dom0_mem parameter. For example, to change the memory allocation to 1024 MB, edit the kernel line to be:

kernel /xen.gz console=com1,vga com1=38400,8n1 dom0_mem=1024M