Tuesday, October 20, 2015

Oracle Enterprise Manager - This report has saved copies

When using Oracle Enterprise Manager to manage your IT footprint you will most likely also want to make use of its reporting functions. In the latest releases Oracle is pushing towards Oracle BI rather than the older reporting options; however, many deployments still use the "old" method of reporting, which works fine in most cases.

In some cases you do want to make a change to a report you have created and might run into a message like this: "You have chosen to edit report "xxxx". This report has saved copies. Do you want to edit the report with limited editing capabilities?".

This means that you cannot change the definition of the report while there are still "old" copies. To resolve this you first have to remove the copies before you can make your changes. To do so, log in as a user who has the rights to change the report and open the report itself (not in edit mode, but in view mode). You will see, as shown in the screenshot below, the number of saved copies.

When you click on the number you will be guided to a page like the one below:

You will have to delete all saved copies of this report. When you have done so and you enter the edit mode of the report again you will see that you have full editing capabilities and are able to make all changes required. 

Thursday, October 08, 2015

Oracle Linux - NuPIC AI core installation

NuPIC is an open source project based on a theory of the neocortex called Hierarchical Temporal Memory (HTM). Parts of HTM theory have been implemented, tested, and used in applications, while other parts are still being developed. Today the HTM code in NuPIC can be used to analyze streaming data: it learns the time-based patterns in data, predicts future values, and detects anomalies. HTM is a set of algorithms that model the functionality of the neocortex in the human brain, and Numenta positions HTM theory as the key to unlocking intelligent applications and machines. NuPIC is the core product from Numenta; it is open source and available to everyone who would like to experiment with it, build upon it or add to it.

NuPIC is a great starting point for developing intelligent applications. However, keep in mind that this field of computer science is young and HTM itself is fairly new. Or, in the words of Jeff Hawkins: "This stuff is not easy. I can assure you that once you understand it, you will see a beauty in it. But most people take months to deeply understand the CLA. The tasks of creating hierarchies of CLAs and adding in motor capabilities are very difficult. Even just using the CLA in its current form is not trivial due to the learning required."

If you want to run NuPIC on Oracle Linux, a number of steps are slightly different from an installation on a MacBook. A couple of dependencies also have to be in place before you can install NuPIC on Oracle Linux: Python 2.7, the Python development headers, pip, wheel, NumPy and a C++ compiler such as gcc or clang. The sections below walk through each of them.
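
As a rough consolidated sketch (using the same yum packages and pip commands that appear in the individual steps below; your repository configuration may differ), the prerequisites could be pulled in as follows:

# install the Python development headers and the C/C++ compilers via yum
yum install -y python-devel gcc gcc-c++
# install pip via the setuptools easy_install command, then the Python-level build dependencies
easy_install pip
pip install wheel numpy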

Python development headers
Next to Python itself, which will most likely ship with your Oracle Linux installation, you have to make sure the Python development headers are present. You can check whether they are installed by executing the command below; if it returns a package name, the headers are already on your system.

[root@localhost ~]# rpm -qa | grep python-devel
[root@localhost ~]#

In case you do not get a result you will have to install the Python development headers by executing a yum install command as shown below:

[root@localhost ~]# yum install python-devel

One of the requirements for installing NuPIC is pip. pip is a package management system used to install and manage software packages written in Python; many of those packages can be found in the Python Package Index (PyPI). If you have installed the Python setuptools, which I describe in this blogpost, pip can be installed with the easy_install command that is part of the setuptools distribution.

[root@localhost ~]# easy_install pip
Searching for pip
Best match: pip 6.1.1
Adding pip 6.1.1 to easy-install.pth file
Installing pip script to /usr/bin
Installing pip3.4 script to /usr/bin
Installing pip3 script to /usr/bin

Using /usr/lib/python2.7/site-packages
Processing dependencies for pip
Finished processing dependencies for pip
[root@localhost ~]#

wheel is also required as a dependency. A wheel is a built-package format for Python: a ZIP-format archive with a specially formatted filename and the .whl extension. It is designed to contain all the files for a PEP 376 compatible install in a way that is very close to the on-disk format. Many packages can be properly installed with only the "Unpack" step (simply extracting the file onto sys.path), and the unpacked archive preserves enough information to "Spread" (copy data and scripts to their final locations) at any later time. You can install wheel with the freshly installed pip by executing the command below. In my case this resulted in some warnings which you can (and should) resolve, but they do not block the installation.

[root@localhost ~]# pip install wheel
/usr/lib/python2.7/site-packages/pip/_vendor/requests/packages/urllib3/util/ssl_.py:79: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.
You are using pip version 6.1.1, however version 7.1.2 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
Collecting wheel
/usr/lib/python2.7/site-packages/pip/_vendor/requests/packages/urllib3/util/ssl_.py:79: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.
    100% |████████████████████████████████| 65kB 1.8MB/s
Installing collected packages: wheel
Successfully installed wheel-0.26.0
[root@localhost ~]#

It will not come as a surprise that NumPy also needs to be installed on the system. NumPy is the fundamental package for scientific computing with Python. It contains, among other things: a powerful N-dimensional array object, sophisticated (broadcasting) functions, tools for integrating C/C++ and Fortran code, and useful linear algebra, Fourier transform, and random number capabilities. Besides its obvious scientific uses, NumPy can also be used as an efficient multi-dimensional container of generic data. Arbitrary data types can be defined, which allows NumPy to seamlessly and speedily integrate with a wide variety of databases.

NumPy can be installed by executing the command:
pip install numpy

As the core of NuPIC is written in C++ you will need a C++ compiler. The obvious choice is GCC, which most likely is already installed on your system. You can check its availability with the command below; in my example it was already installed.

[root@localhost ~]# rpm -qa | grep gcc
[root@localhost ~]#

In case it is not installed you can execute a yum install command to install gcc on your Oracle Linux machine. One small but important note: NuPIC expects GCC 4.8.

Installing NuPIC
After you have ensured all dependencies are in place you can install NuPIC. The installation of NuPIC on Oracle Linux is a bit different from the installation on, for example, a Mac. The reason is that the nupic.bindings binary distribution for Linux is not hosted on PyPI alongside the OS X distribution: NuPIC uses the wheel binary format, and PyPI does not support hosting Linux wheel files. This forces you to download the wheel file directly from Numenta rather than from PyPI.

pip install https://s3-us-west-2.amazonaws.com/artifacts.numenta.org/numenta/nupic.core/releases/nupic.bindings/nupic.bindings-0.2.1-cp27-none-linux_x86_64.whl
pip install nupic

If all is ok the "pip install nupic" command should work like a charm. However, in case you run into a compiler error like the one shown below it might be that you are missing some additional prerequisites.

cc -c /tmp/tmphmvPkY/vers.cpp -o tmp/tmphmvPkY/vers.o --std=c++11
    cc: error trying to exec 'cc1plus': execvp: No such file or directory
    *WARNING* no libcapnp detected. Will download and build it from source now. If you have C++ Cap'n Proto installed, it may be out of date or is not being detected. Downloading and building libcapnp may take a while.
    fetching https://capnproto.org/capnproto-c++- into /tmp/pip-build-PHQZgs/pycapnp/bundled
    configure: error: *** A compiler with support for C++11 language features is required.

To resolve this issue you will need to additionally install gcc-c++ by executing:

[root@localhost ~]# yum install gcc-c++

Testing NuPIC
To verify that your installation of NuPIC was successful you can run the unit tests provided in the GitHub repository. Execute py.test against the tests/unit/ directory of the repository; this should look like the example below.

[root@localhost nupic-master]# py.test tests/unit/
=== test session starts  ===
platform linux2 -- Python 2.7.5 -- pytest-2.5.1
plugins: cov, xdist
collected 844 items / 2 skipped

tests/unit/nupic/utils_test.py ......
tests/unit/nupic/algorithms/anomaly_likelihood_jeff_test.py ...ss..
tests/unit/nupic/algorithms/anomaly_likelihood_test.py ....................
tests/unit/nupic/algorithms/anomaly_test.py ..............
tests/unit/nupic/algorithms/cells4_test.py .
tests/unit/nupic/algorithms/cla_classifier_diff_test.py ...................
tests/unit/nupic/algorithms/cla_classifier_test.py ...................
tests/unit/nupic/algorithms/fast_cla_classifier_test.py ...................
tests/unit/nupic/algorithms/knn_classifier_test.py .....s
tests/unit/nupic/algorithms/nab_detector_test.py ..
tests/unit/nupic/algorithms/sp_overlap_test.py .s.s
tests/unit/nupic/algorithms/svm_test.py ..s
tests/unit/nupic/algorithms/tp10x2_test.py .
tests/unit/nupic/data/aggregator_test.py .
tests/unit/nupic/data/dictutils_test.py ......
tests/unit/nupic/data/fieldmeta_test.py .....
tests/unit/nupic/data/file_record_stream_test.py ......
tests/unit/nupic/data/filters_test.py s
tests/unit/nupic/data/functionsource_test.py ......
tests/unit/nupic/data/inference_shifter_test.py ........
tests/unit/nupic/data/record_stream_test.py .......
tests/unit/nupic/data/utils_test.py .......
tests/unit/nupic/data/generators/anomalyzer_test.py ...........
tests/unit/nupic/data/generators/pattern_machine_test.py .........
tests/unit/nupic/data/generators/sequence_machine_test.py .....
tests/unit/nupic/encoders/adaptivescalar_test.py .......
tests/unit/nupic/encoders/category_test.py ..
tests/unit/nupic/encoders/coordinate_test.py ................
tests/unit/nupic/encoders/date_test.py ........
tests/unit/nupic/encoders/delta_test.py .....
tests/unit/nupic/encoders/geospatial_coordinate_test.py ...........
tests/unit/nupic/encoders/logenc_test.py ......
tests/unit/nupic/encoders/multi_test.py ..
tests/unit/nupic/encoders/pass_through_encoder_test.py ....
tests/unit/nupic/encoders/random_distributed_scalar_test.py ...............
tests/unit/nupic/encoders/scalar_test.py .............
tests/unit/nupic/encoders/scalarspace_test.py .
tests/unit/nupic/encoders/sdrcategory_test.py ...
tests/unit/nupic/encoders/sparse_pass_through_encoder_test.py ....
tests/unit/nupic/engine/network_test.py .........
tests/unit/nupic/engine/syntactic_sugar_test.py .....
tests/unit/nupic/engine/unified_py_parameter_test.py ..
tests/unit/nupic/frameworks/opf/clamodel_classifier_helper_test.py ......................
tests/unit/nupic/frameworks/opf/clamodel_test.py ......
tests/unit/nupic/frameworks/opf/opf_metrics_test.py ...............................
tests/unit/nupic/frameworks/opf/previous_value_model_test.py ......
tests/unit/nupic/frameworks/opf/safe_interpreter_test.py ........
tests/unit/nupic/frameworks/opf/two_gram_model_test.py .....
tests/unit/nupic/frameworks/opf/common_models/cluster_params_test.py .
tests/unit/nupic/math/array_algorithms_test.py ...
tests/unit/nupic/math/cast_mode_test.py s
tests/unit/nupic/math/lgamma_test.py .
tests/unit/nupic/math/nupic_random_test.py .............
tests/unit/nupic/math/sparse_binary_matrix_test.py ............s............
tests/unit/nupic/math/sparse_matrix_test.py ...s...............................
tests/unit/nupic/regions/anomaly_region_test.py .
tests/unit/nupic/regions/knn_anomaly_classifier_region_test.py ....................
tests/unit/nupic/regions/pyregion_test.py ....
tests/unit/nupic/regions/record_sensor_region_test.py .
tests/unit/nupic/regions/regions_spec_test.py s...s......
tests/unit/nupic/research/connections_test.py .............
tests/unit/nupic/research/inhibition_object_test.py s
tests/unit/nupic/research/sp_learn_inference_test.py s
tests/unit/nupic/research/spatial_pooler_boost_test.py ..
tests/unit/nupic/research/spatial_pooler_compatability_test.py ....ss..
tests/unit/nupic/research/spatial_pooler_compute_test.py ..
tests/unit/nupic/research/spatial_pooler_cpp_api_test.py ..............................
tests/unit/nupic/research/spatial_pooler_py_api_test.py ..............................
tests/unit/nupic/research/spatial_pooler_unit_test.py s.................................
tests/unit/nupic/research/temporal_memory_test.py ...........................
tests/unit/nupic/research/tp10x2_test.py ....
tests/unit/nupic/research/tp_constant_test.py ...
tests/unit/nupic/research/tp_test.py ....
tests/unit/nupic/research/monitor_mixin/metric_test.py ..
tests/unit/nupic/research/monitor_mixin/trace_test.py ..
tests/unit/nupic/support/configuration_test.py ............s....................
tests/unit/nupic/support/custom_configuration_test.py .........s..............
tests/unit/nupic/support/decorators_test.py ....
tests/unit/nupic/support/object_json_test.py ...............
tests/unit/nupic/support/consoleprinter_test/consoleprinter_test.py .
=== 825 passed, 21 skipped in 100.95 seconds ===
[root@localhost nupic-master]#

This should enable you to start exploring NuPIC.

Tuesday, October 06, 2015

Oracle Linux - Install Python setuptools

When working with Python, and when you want to make your life easier when installing new modules and functions, it is a common best practice to use tools such as pip and/or the Python setuptools. Python setuptools helps you to easily download, build, install, upgrade, and uninstall Python packages. Setting up setuptools on Oracle Linux takes basically a single command: executing it will download a Python script and run it, and this script ensures setuptools is downloaded and installed correctly on your system.

You can download and execute the script manually in two steps, or you can do this in one go so that only a single command is needed to install setuptools on Oracle Linux. Below is an example of the single command, which uses wget and pipes the result to Python for execution.

[root@localhost ~]# wget https://bootstrap.pypa.io/ez_setup.py -O - | python
--2015-10-06 16:06:27--  https://bootstrap.pypa.io/ez_setup.py
Resolving bootstrap.pypa.io (bootstrap.pypa.io)...
Connecting to bootstrap.pypa.io (bootstrap.pypa.io)||:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 11434 (11K) [text/x-python]
Saving to: ‘STDOUT’

100%[==================================>] 11,434      --.-K/s   in 0s

2015-10-06 16:06:28 (534 MB/s) - written to stdout [11434/11434]

Downloading https://pypi.python.org/packages/source/s/setuptools/setuptools-18.3.2.zip
Extracting in /tmp/tmpuwKkuT
Now working in /tmp/tmpuwKkuT/setuptools-18.3.2
Installing Setuptools
running install
running bdist_egg
running egg_info
writing requirements to setuptools.egg-info/requires.txt
writing setuptools.egg-info/PKG-INFO
writing top-level names to setuptools.egg-info/top_level.txt
writing dependency_links to setuptools.egg-info/dependency_links.txt
writing entry points to setuptools.egg-info/entry_points.txt
reading manifest file 'setuptools.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
writing manifest file 'setuptools.egg-info/SOURCES.txt'
installing library code to build/bdist.linux-x86_64/egg
running install_lib
running build_py
creating build
creating build/lib
copying easy_install.py -> build/lib
creating build/lib/_markerlib
copying _markerlib/__init__.py -> build/lib/_markerlib
copying _markerlib/markers.py -> build/lib/_markerlib
creating build/lib/pkg_resources
copying pkg_resources/__init__.py -> build/lib/pkg_resources
creating build/lib/setuptools
copying setuptools/__init__.py -> build/lib/setuptools
copying setuptools/archive_util.py -> build/lib/setuptools
copying setuptools/compat.py -> build/lib/setuptools
copying setuptools/depends.py -> build/lib/setuptools
copying setuptools/dist.py -> build/lib/setuptools
copying setuptools/extension.py -> build/lib/setuptools
copying setuptools/lib2to3_ex.py -> build/lib/setuptools
copying setuptools/msvc9_support.py -> build/lib/setuptools
copying setuptools/package_index.py -> build/lib/setuptools
copying setuptools/py26compat.py -> build/lib/setuptools
copying setuptools/py27compat.py -> build/lib/setuptools
copying setuptools/py31compat.py -> build/lib/setuptools
copying setuptools/sandbox.py -> build/lib/setuptools
copying setuptools/site-patch.py -> build/lib/setuptools
copying setuptools/ssl_support.py -> build/lib/setuptools
copying setuptools/unicode_utils.py -> build/lib/setuptools
copying setuptools/utils.py -> build/lib/setuptools
copying setuptools/version.py -> build/lib/setuptools
copying setuptools/windows_support.py -> build/lib/setuptools
creating build/lib/pkg_resources/_vendor
copying pkg_resources/_vendor/__init__.py -> build/lib/pkg_resources/_vendor
creating build/lib/pkg_resources/_vendor/packaging
copying pkg_resources/_vendor/packaging/__about__.py -> build/lib/pkg_resources/_vendor/packaging
copying pkg_resources/_vendor/packaging/__init__.py -> build/lib/pkg_resources/_vendor/packaging
copying pkg_resources/_vendor/packaging/_compat.py -> build/lib/pkg_resources/_vendor/packaging
copying pkg_resources/_vendor/packaging/_structures.py -> build/lib/pkg_resources/_vendor/packaging
copying pkg_resources/_vendor/packaging/specifiers.py -> build/lib/pkg_resources/_vendor/packaging
copying pkg_resources/_vendor/packaging/version.py -> build/lib/pkg_resources/_vendor/packaging
creating build/lib/setuptools/command
copying setuptools/command/__init__.py -> build/lib/setuptools/command
copying setuptools/command/alias.py -> build/lib/setuptools/command
copying setuptools/command/bdist_egg.py -> build/lib/setuptools/command
copying setuptools/command/bdist_rpm.py -> build/lib/setuptools/command
copying setuptools/command/bdist_wininst.py -> build/lib/setuptools/command
copying setuptools/command/build_ext.py -> build/lib/setuptools/command
copying setuptools/command/build_py.py -> build/lib/setuptools/command
copying setuptools/command/develop.py -> build/lib/setuptools/command
copying setuptools/command/easy_install.py -> build/lib/setuptools/command
copying setuptools/command/egg_info.py -> build/lib/setuptools/command
copying setuptools/command/install.py -> build/lib/setuptools/command
copying setuptools/command/install_egg_info.py -> build/lib/setuptools/command
copying setuptools/command/install_lib.py -> build/lib/setuptools/command
copying setuptools/command/install_scripts.py -> build/lib/setuptools/command
copying setuptools/command/register.py -> build/lib/setuptools/command
copying setuptools/command/rotate.py -> build/lib/setuptools/command
copying setuptools/command/saveopts.py -> build/lib/setuptools/command
copying setuptools/command/sdist.py -> build/lib/setuptools/command
copying setuptools/command/setopt.py -> build/lib/setuptools/command
copying setuptools/command/test.py -> build/lib/setuptools/command
copying setuptools/command/upload_docs.py -> build/lib/setuptools/command
copying setuptools/script (dev).tmpl -> build/lib/setuptools
copying setuptools/script.tmpl -> build/lib/setuptools
creating build/bdist.linux-x86_64
creating build/bdist.linux-x86_64/egg
copying build/lib/easy_install.py -> build/bdist.linux-x86_64/egg
creating build/bdist.linux-x86_64/egg/_markerlib
copying build/lib/_markerlib/__init__.py -> build/bdist.linux-x86_64/egg/_markerlib
copying build/lib/_markerlib/markers.py -> build/bdist.linux-x86_64/egg/_markerlib
creating build/bdist.linux-x86_64/egg/pkg_resources
copying build/lib/pkg_resources/__init__.py -> build/bdist.linux-x86_64/egg/pkg_resources
creating build/bdist.linux-x86_64/egg/pkg_resources/_vendor
copying build/lib/pkg_resources/_vendor/__init__.py -> build/bdist.linux-x86_64/egg/pkg_resources/_vendor
creating build/bdist.linux-x86_64/egg/pkg_resources/_vendor/packaging
copying build/lib/pkg_resources/_vendor/packaging/__about__.py -> build/bdist.linux-x86_64/egg/pkg_resources/_vendor/packaging
copying build/lib/pkg_resources/_vendor/packaging/__init__.py -> build/bdist.linux-x86_64/egg/pkg_resources/_vendor/packaging
copying build/lib/pkg_resources/_vendor/packaging/_compat.py -> build/bdist.linux-x86_64/egg/pkg_resources/_vendor/packaging
copying build/lib/pkg_resources/_vendor/packaging/_structures.py -> build/bdist.linux-x86_64/egg/pkg_resources/_vendor/packaging
copying build/lib/pkg_resources/_vendor/packaging/specifiers.py -> build/bdist.linux-x86_64/egg/pkg_resources/_vendor/packaging
copying build/lib/pkg_resources/_vendor/packaging/version.py -> build/bdist.linux-x86_64/egg/pkg_resources/_vendor/packaging
creating build/bdist.linux-x86_64/egg/setuptools
copying build/lib/setuptools/__init__.py -> build/bdist.linux-x86_64/egg/setuptools
copying build/lib/setuptools/archive_util.py -> build/bdist.linux-x86_64/egg/setuptools
copying build/lib/setuptools/compat.py -> build/bdist.linux-x86_64/egg/setuptools
copying build/lib/setuptools/depends.py -> build/bdist.linux-x86_64/egg/setuptools
copying build/lib/setuptools/dist.py -> build/bdist.linux-x86_64/egg/setuptools
copying build/lib/setuptools/extension.py -> build/bdist.linux-x86_64/egg/setuptools
copying build/lib/setuptools/lib2to3_ex.py -> build/bdist.linux-x86_64/egg/setuptools
copying build/lib/setuptools/msvc9_support.py -> build/bdist.linux-x86_64/egg/setuptools
copying build/lib/setuptools/package_index.py -> build/bdist.linux-x86_64/egg/setuptools
copying build/lib/setuptools/py26compat.py -> build/bdist.linux-x86_64/egg/setuptools
copying build/lib/setuptools/py27compat.py -> build/bdist.linux-x86_64/egg/setuptools
copying build/lib/setuptools/py31compat.py -> build/bdist.linux-x86_64/egg/setuptools
copying build/lib/setuptools/sandbox.py -> build/bdist.linux-x86_64/egg/setuptools
copying build/lib/setuptools/site-patch.py -> build/bdist.linux-x86_64/egg/setuptools
copying build/lib/setuptools/ssl_support.py -> build/bdist.linux-x86_64/egg/setuptools
copying build/lib/setuptools/unicode_utils.py -> build/bdist.linux-x86_64/egg/setuptools
copying build/lib/setuptools/utils.py -> build/bdist.linux-x86_64/egg/setuptools
copying build/lib/setuptools/version.py -> build/bdist.linux-x86_64/egg/setuptools
copying build/lib/setuptools/windows_support.py -> build/bdist.linux-x86_64/egg/setuptools
creating build/bdist.linux-x86_64/egg/setuptools/command
copying build/lib/setuptools/command/__init__.py -> build/bdist.linux-x86_64/egg/setuptools/command
copying build/lib/setuptools/command/alias.py -> build/bdist.linux-x86_64/egg/setuptools/command
copying build/lib/setuptools/command/bdist_egg.py -> build/bdist.linux-x86_64/egg/setuptools/command
copying build/lib/setuptools/command/bdist_rpm.py -> build/bdist.linux-x86_64/egg/setuptools/command
copying build/lib/setuptools/command/bdist_wininst.py -> build/bdist.linux-x86_64/egg/setuptools/command
copying build/lib/setuptools/command/build_ext.py -> build/bdist.linux-x86_64/egg/setuptools/command
copying build/lib/setuptools/command/build_py.py -> build/bdist.linux-x86_64/egg/setuptools/command
copying build/lib/setuptools/command/develop.py -> build/bdist.linux-x86_64/egg/setuptools/command
copying build/lib/setuptools/command/easy_install.py -> build/bdist.linux-x86_64/egg/setuptools/command
copying build/lib/setuptools/command/egg_info.py -> build/bdist.linux-x86_64/egg/setuptools/command
copying build/lib/setuptools/command/install.py -> build/bdist.linux-x86_64/egg/setuptools/command
copying build/lib/setuptools/command/install_egg_info.py -> build/bdist.linux-x86_64/egg/setuptools/command
copying build/lib/setuptools/command/install_lib.py -> build/bdist.linux-x86_64/egg/setuptools/command
copying build/lib/setuptools/command/install_scripts.py -> build/bdist.linux-x86_64/egg/setuptools/command
copying build/lib/setuptools/command/register.py -> build/bdist.linux-x86_64/egg/setuptools/command
copying build/lib/setuptools/command/rotate.py -> build/bdist.linux-x86_64/egg/setuptools/command
copying build/lib/setuptools/command/saveopts.py -> build/bdist.linux-x86_64/egg/setuptools/command
copying build/lib/setuptools/command/sdist.py -> build/bdist.linux-x86_64/egg/setuptools/command
copying build/lib/setuptools/command/setopt.py -> build/bdist.linux-x86_64/egg/setuptools/command
copying build/lib/setuptools/command/test.py -> build/bdist.linux-x86_64/egg/setuptools/command
copying build/lib/setuptools/command/upload_docs.py -> build/bdist.linux-x86_64/egg/setuptools/command
copying build/lib/setuptools/script (dev).tmpl -> build/bdist.linux-x86_64/egg/setuptools
copying build/lib/setuptools/script.tmpl -> build/bdist.linux-x86_64/egg/setuptools
byte-compiling build/bdist.linux-x86_64/egg/easy_install.py to easy_install.pyc
byte-compiling build/bdist.linux-x86_64/egg/_markerlib/__init__.py to __init__.pyc
byte-compiling build/bdist.linux-x86_64/egg/_markerlib/markers.py to markers.pyc
byte-compiling build/bdist.linux-x86_64/egg/pkg_resources/__init__.py to __init__.pyc
byte-compiling build/bdist.linux-x86_64/egg/pkg_resources/_vendor/__init__.py to __init__.pyc
byte-compiling build/bdist.linux-x86_64/egg/pkg_resources/_vendor/packaging/__about__.py to __about__.pyc
byte-compiling build/bdist.linux-x86_64/egg/pkg_resources/_vendor/packaging/__init__.py to __init__.pyc
byte-compiling build/bdist.linux-x86_64/egg/pkg_resources/_vendor/packaging/_compat.py to _compat.pyc
byte-compiling build/bdist.linux-x86_64/egg/pkg_resources/_vendor/packaging/_structures.py to _structures.pyc
byte-compiling build/bdist.linux-x86_64/egg/pkg_resources/_vendor/packaging/specifiers.py to specifiers.pyc
byte-compiling build/bdist.linux-x86_64/egg/pkg_resources/_vendor/packaging/version.py to version.pyc
byte-compiling build/bdist.linux-x86_64/egg/setuptools/__init__.py to __init__.pyc
byte-compiling build/bdist.linux-x86_64/egg/setuptools/archive_util.py to archive_util.pyc
byte-compiling build/bdist.linux-x86_64/egg/setuptools/compat.py to compat.pyc
byte-compiling build/bdist.linux-x86_64/egg/setuptools/depends.py to depends.pyc
byte-compiling build/bdist.linux-x86_64/egg/setuptools/dist.py to dist.pyc
byte-compiling build/bdist.linux-x86_64/egg/setuptools/extension.py to extension.pyc
byte-compiling build/bdist.linux-x86_64/egg/setuptools/lib2to3_ex.py to lib2to3_ex.pyc
byte-compiling build/bdist.linux-x86_64/egg/setuptools/msvc9_support.py to msvc9_support.pyc
byte-compiling build/bdist.linux-x86_64/egg/setuptools/package_index.py to package_index.pyc
byte-compiling build/bdist.linux-x86_64/egg/setuptools/py26compat.py to py26compat.pyc
byte-compiling build/bdist.linux-x86_64/egg/setuptools/py27compat.py to py27compat.pyc
byte-compiling build/bdist.linux-x86_64/egg/setuptools/py31compat.py to py31compat.pyc
byte-compiling build/bdist.linux-x86_64/egg/setuptools/sandbox.py to sandbox.pyc
byte-compiling build/bdist.linux-x86_64/egg/setuptools/site-patch.py to site-patch.pyc
byte-compiling build/bdist.linux-x86_64/egg/setuptools/ssl_support.py to ssl_support.pyc
byte-compiling build/bdist.linux-x86_64/egg/setuptools/unicode_utils.py to unicode_utils.pyc
byte-compiling build/bdist.linux-x86_64/egg/setuptools/utils.py to utils.pyc
byte-compiling build/bdist.linux-x86_64/egg/setuptools/version.py to version.pyc
byte-compiling build/bdist.linux-x86_64/egg/setuptools/windows_support.py to windows_support.pyc
byte-compiling build/bdist.linux-x86_64/egg/setuptools/command/__init__.py to __init__.pyc
byte-compiling build/bdist.linux-x86_64/egg/setuptools/command/alias.py to alias.pyc
byte-compiling build/bdist.linux-x86_64/egg/setuptools/command/bdist_egg.py to bdist_egg.pyc
byte-compiling build/bdist.linux-x86_64/egg/setuptools/command/bdist_rpm.py to bdist_rpm.pyc
byte-compiling build/bdist.linux-x86_64/egg/setuptools/command/bdist_wininst.py to bdist_wininst.pyc
byte-compiling build/bdist.linux-x86_64/egg/setuptools/command/build_ext.py to build_ext.pyc
byte-compiling build/bdist.linux-x86_64/egg/setuptools/command/build_py.py to build_py.pyc
byte-compiling build/bdist.linux-x86_64/egg/setuptools/command/develop.py to develop.pyc
byte-compiling build/bdist.linux-x86_64/egg/setuptools/command/easy_install.py to easy_install.pyc
byte-compiling build/bdist.linux-x86_64/egg/setuptools/command/egg_info.py to egg_info.pyc
byte-compiling build/bdist.linux-x86_64/egg/setuptools/command/install.py to install.pyc
byte-compiling build/bdist.linux-x86_64/egg/setuptools/command/install_egg_info.py to install_egg_info.pyc
byte-compiling build/bdist.linux-x86_64/egg/setuptools/command/install_lib.py to install_lib.pyc
byte-compiling build/bdist.linux-x86_64/egg/setuptools/command/install_scripts.py to install_scripts.pyc
byte-compiling build/bdist.linux-x86_64/egg/setuptools/command/register.py to register.pyc
byte-compiling build/bdist.linux-x86_64/egg/setuptools/command/rotate.py to rotate.pyc
byte-compiling build/bdist.linux-x86_64/egg/setuptools/command/saveopts.py to saveopts.pyc
byte-compiling build/bdist.linux-x86_64/egg/setuptools/command/sdist.py to sdist.pyc
byte-compiling build/bdist.linux-x86_64/egg/setuptools/command/setopt.py to setopt.pyc
byte-compiling build/bdist.linux-x86_64/egg/setuptools/command/test.py to test.pyc
byte-compiling build/bdist.linux-x86_64/egg/setuptools/command/upload_docs.py to upload_docs.pyc
creating build/bdist.linux-x86_64/egg/EGG-INFO
copying setuptools.egg-info/PKG-INFO -> build/bdist.linux-x86_64/egg/EGG-INFO
copying setuptools.egg-info/SOURCES.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
copying setuptools.egg-info/dependency_links.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
copying setuptools.egg-info/entry_points.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
copying setuptools.egg-info/requires.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
copying setuptools.egg-info/top_level.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
copying setuptools.egg-info/zip-safe -> build/bdist.linux-x86_64/egg/EGG-INFO
creating dist
creating 'dist/setuptools-18.3.2-py2.7.egg' and adding 'build/bdist.linux-x86_64/egg' to it
removing 'build/bdist.linux-x86_64/egg' (and everything under it)
Processing setuptools-18.3.2-py2.7.egg
Copying setuptools-18.3.2-py2.7.egg to /usr/lib/python2.7/site-packages
Adding setuptools 18.3.2 to easy-install.pth file
Installing easy_install script to /usr/bin
Installing easy_install-2.7 script to /usr/bin

Installed /usr/lib/python2.7/site-packages/setuptools-18.3.2-py2.7.egg
Processing dependencies for setuptools==18.3.2
Finished processing dependencies for setuptools==18.3.2
[root@localhost ~]#

In essence there is nothing more to installing the Python setuptools on Oracle Linux; a single command will ensure you are in business and good to go.

Oracle Linux - generate MAC address

In most cases you will not need to generate a MAC address. It will come with your network interface or, in the case of a virtual machine, it will be generated for you by the orchestration tooling. However, in some cases you might need to generate a random MAC address. In my case this was when we experimented with the Oracle VM APIs and at some point we wanted to provide the MAC address to the code that orchestrated the creation and deployment of a new VM.

Generating a new MAC address can be done in multiple ways; the Python script below is just one example. It can be integrated fairly easily into wider Python code, or you can call it from a Bash script.

# macgen.py - generate a random MAC address for guests on Xen
import random

def randomMAC():
    # 00:16:3e is the OUI prefix reserved for Xen guests; the last three octets are random
    mac = [0x00, 0x16, 0x3e,
           random.randint(0x00, 0x7f),
           random.randint(0x00, 0xff),
           random.randint(0x00, 0xff)]
    # format each byte as a two-digit hex value and join the bytes with colons
    return ':'.join(map(lambda x: "%02x" % x, mac))

print randomMAC()

This script will simply print a randomly generated MAC address.
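
A quick usage sketch from the shell; the script prints one address in the form 00:16:3e:xx:xx:xx, where the last three octets differ on every run:

# run the generator and capture the generated address in a shell variable
NEW_MAC=$(python macgen.py)
echo $NEW_MAC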

Oracle VM - selecting a DR layer

When developing a new solution, including the application, data-store and infrastructure components, one of the questions to ask is on which layer to build resilience against failure: on which level of the stack will you protect against failure of a component, and on which level will your disaster recovery focus. In essence the answer is quite simple: you should ensure that disaster recovery is safeguarded as high as possible in the stack. The full answer, however, is very complex and includes disaster recovery, high availability and maximum availability components. Building a solution that is resilient against failure is a complex process in which every component needs to be taken into account. However, making sure that you have disaster recovery as high up in the stack as possible will make your life much easier.

As an example, take the image below, which shows an application-centered disaster recovery solution within a virtualized environment based on Oracle VM.

Within this solution the applications run in an active-active setup across both site A and site B. Information between the two sites is kept in sync by making use of Oracle's Maximum Availability Architecture (MAA) principles. This means that when a site fails the application will still be able to function, as it is also running on the other site. Users should not face any downtime and should not even be aware that one of the two sites has been lost due to a disaster.

The application-centered disaster recovery solution is the most resilient solution against disasters and the loss of a site. However, in some cases it is not feasible to run an architecture as shown above, yet you would still like to be able to perform disaster recovery of the virtual machines running within your deployment. A solution for this is to use block replication at the storage level and allow your recovery site (site B) to start the VMs in case site A is lost.

Within this model you replicate all storage associated with the VMs from site A to a storage repository within site B. In essence this is an exact copy of the VM; however, on site B the machine is in a stopped state. This is also represented in the diagram below, where you can see the replication of storage between the two sites more clearly. For this solution you can use whichever form of storage block replication your storage appliance supports.

In case of a failure you have to ensure that all machines on site A are stopped; after this you can make the storage on site B readable and writable and start the virtual machines. This might not be the ideal solution compared to disaster recovery at higher levels of the stack; however, if you are forced to ensure disaster recovery at the infrastructure/VM layer instead of the application level, this is a solution that can be used.

For more information, also view the slidedeck below.

Friday, October 02, 2015

Oracle Linux - detect security issues

When operating a large landscape of Linux machines, in our case a large landscape of Oracle Linux machines, security is one of the vital things to keep in mind. In an ideal world all your Linux deployments would be of exactly the same version and contain exactly the same level of patching; no machine would differ from another, and you would be able to run a yum update command on all machines without ever facing an issue or being required to talk to end customers or other tech teams. Even though in some situations you are able to maintain such a state, it is common to see a landscape of servers that is not equally patched, and in some cases servers are not patched for a long period of time. This is not necessarily due to bad maintenance by the Linux administrators; commonly it is related to pressure from the business not to change the systems, or to not getting approval from a change advisory board.

When it comes to new or improved functionality that comes with a Linux patch this might be acceptable; however, missing a security patch can be much more serious. Oracle Enterprise Manager, in combination with yum, provides a solution that shows which patches need to be applied on which system. A different solution can also be used specifically to identify which security issues have not been addressed on a given system.

To get an overview of the security vulnerabilities present on your system you can use OpenSCAP. OpenSCAP is an implementation of SCAP, a line of standards managed by NIST that was created to provide a standardized approach to maintaining the security of enterprise systems, such as automatically verifying the presence of patches, checking system security configuration settings, and examining systems for signs of compromise.

Oracle provides an OVAL (Open Vulnerability and Assessment Language) XML file which you can use in combination with OpenSCAP to run against your Oracle Linux deployment and get a quick overview of what needs attention on your system and what looks to be correct. You can find more information on this subject in the Oracle Linux security guide.

After you have installed the needed components using yum, you have to download the Oracle Linux specific content, or more precisely, the Oracle Linux ELSA file in OVAL format. Oracle provides one file per year, where each file contains the information on security issues found during that year. As an example, if you want to run an audit against the ELSA file of 2015 you need to perform the following steps:

1) Download the ELSA information in the OVAL format and extract it from the bz2 file
wget http://linux.oracle.com/security/oval/com.oracle.elsa-2015.xml.bz2
bzip2 -d com.oracle.elsa-2015.xml.bz2

2) Run the audit. In this case we send both the XML result as well as the HTML report to /tmp however you are free to select any location you want.
oscap oval eval --results /tmp/elsa-results-oval-2015.xml --report /tmp/elsa-report-2015.html ./com.oracle.elsa-2015.xml

This produces rather large output on the screen which provides some quick information; however, the more valuable information can be found in both the XML result and the HTML report which we have sent to /tmp. For reference, below is the shell output of the audit on the 2015 file, which I ran against an Oracle Linux installation running kernel 3.8.13-98.2.2.el7uek.x86_64:
[root@localhost oscap]# oscap oval eval --results /tmp/elsa-results-oval-2015.xml --report /tmp/elsa-report-2015.html ./com.oracle.elsa-2015.xml
Definition oval:com.oracle.elsa:def:20153073: false
Definition oval:com.oracle.elsa:def:20153072: false
Definition oval:com.oracle.elsa:def:20153071: true

//------------ SNIP SNIP ------------//

Definition oval:com.oracle.elsa:def:20150166: false
Definition oval:com.oracle.elsa:def:20150165: false
Definition oval:com.oracle.elsa:def:201501641: false
Definition oval:com.oracle.elsa:def:20150164: false
Definition oval:com.oracle.elsa:def:20150118: false
Definition oval:com.oracle.elsa:def:20150102: true
Definition oval:com.oracle.elsa:def:20150100: false
Definition oval:com.oracle.elsa:def:20150092: false
Definition oval:com.oracle.elsa:def:20150090: false
Definition oval:com.oracle.elsa:def:20150087: false
Definition oval:com.oracle.elsa:def:20150085: false
Definition oval:com.oracle.elsa:def:20150074: false
Definition oval:com.oracle.elsa:def:20150069: false
Definition oval:com.oracle.elsa:def:20150068: false
Definition oval:com.oracle.elsa:def:20150067: false
Definition oval:com.oracle.elsa:def:20150066: false
Definition oval:com.oracle.elsa:def:20150047: false
Definition oval:com.oracle.elsa:def:20150046: false
Definition oval:com.oracle.elsa:def:20150016: false
Definition oval:com.oracle.elsa:def:20150008: false
Evaluation done.
[root@localhost oscap]# 

3) Review the results (and take action)
You have to review the results, which can be done by looking at the HTML report, or you can run a parser against the XML output for a more automated way of checking the results. In case you run a large number of Oracle Linux machines and you want to use the oscap way of checking parts of your security, you will most likely want to keep the XML files in a central location so you do not need to connect all your machines to the public internet, and you will most likely want to run this on a schedule and interpret the results in an automated manner. The HTML file is usable for human reading; the XML file is what you would want to parse once you have more than a handful of servers.
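
As a minimal sketch of such automation, reusing the file names from step 2 above, the console output can be reduced to only the definitions that evaluated to true, in other words the issues that apply to the system:

# run the audit and keep only the ELSA definitions that evaluated to "true"
oscap oval eval --results /tmp/elsa-results-oval-2015.xml \
  --report /tmp/elsa-report-2015.html ./com.oracle.elsa-2015.xml \
  | grep ': true$'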

Saturday, September 26, 2015

Oracle Linux - SSH slow login

When you run a lot of test installations of Oracle Linux, like I do on my laptop and my home Oracle VM installation, you do not have them all configured in DNS. When hopping from Linux machine to Linux machine using SSH you then often see a long delay between the moment you enter your username and the moment you are asked for the password. The reason is that the SSH daemon will by default try to do a reverse DNS lookup for the machine you are connecting from. When running Oracle Linux in an operational environment where you most likely need an audit trail this is absolutely a good way of working. However, when you run multiple lab and play machines in your local environment this is not needed, and the wait between entering your username and being asked for the password quickly becomes an annoyance.

To change the behaviour of the SSH daemon you need to change the configuration file /etc/ssh/sshd_config and ensure that UseDNS no is included in it. In a standard Oracle Linux deployment the line UseDNS yes is present but commented out; since the default behavior is already yes, you have to explicitly include "UseDNS no". To ensure that the setting is applied, restart the service after you have made the change to the file.
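
A minimal sketch of applying this from the shell, assuming the default configuration file location, could look like the line below, after which the restart shown next picks up the change:

# explicitly disable reverse DNS lookups by the SSH daemon
echo "UseDNS no" >> /etc/ssh/sshd_config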

[root@localhost ~]# service sshd restart
Redirecting to /bin/systemctl restart  sshd.service
[root@localhost ~]#

Friday, September 25, 2015

Oracle VM - Anti-Affinity Groups

Using Oracle VM to virtualize machines is a great way to reduce the number of physical machines you need to run a large estate of virtual machines. In general cloud administrators should not have to worry about the exact physical machine on which a virtual machine is started; Oracle VM selects, based upon an algorithm, on which physical machine the virtual machine will start. In some cases, however, you do need to worry about where a virtual machine will start, especially when you do not want a virtual machine started on the same physical machine where certain other virtual machines are already running.

An example of this is when you run a high-availability database cluster on virtual machines. What you want to prevent is that all nodes of the cluster run on the same physical hardware; the obvious reason is that you do not want your entire cluster to fail when one physical box fails. For a long time you were required to ensure this manually or with custom scripting; however, within Oracle VM you now have the Anti-Affinity groups option.

Virtual machines placed in the same Anti-Affinity group will not be started on the same physical hardware. Meaning that if you have n virtual machines, each hosting a node of the same database cluster, you can place them into one Anti-Affinity group and the algorithm responsible for selecting a physical machine on which to start a virtual machine will take this into account.

The above image shows an Anti-Affinity group named MyAAGroup which holds 3 virtual machines, all members of a database cluster. By placing them into the same Anti-Affinity group you have the assurance that they will never be started on the same physical hardware, and by doing so you honor the Maximum Availability Architecture principles in this specific area.

Oracle Linux - change IO scheduler

When installing Oracle Linux you are equipped with an I/O scheduler which is perfectly usable for a database; this is not surprising, as Oracle is originally a database vendor. Input/output (I/O) scheduling is the method that computer operating systems use to decide in which order block I/O operations are submitted to storage volumes; it is sometimes called disk scheduling. However, even though you get a good default scheduler, there might be a need to change it: depending on the type of I/O, your performance can improve by selecting a different scheduler.

The image below shows an overall view of the Linux storage stack, which includes the scheduler within the block layer. The image shows the full stack, which contains more components than just the I/O scheduler.

In case you want to check what the current I/O scheduler is for a specific device you can do this by using the following command (for example for sda):

cat /sys/block/sda/queue/scheduler

This will show you the current scheduler that is used. For example, the output could be the one shown in the example below:

[root@demo1 etc]# cat /sys/block/sda/queue/scheduler
noop [deadline] cfq
[root@demo1 etc]# 

In case you need to change this, there is a difference between changing it at runtime and making the change persistent. To activate a different scheduler at runtime, for example cfq, you can echo the new scheduler name into the same file using the commands below:

[root@demo1 etc]#
[root@demo1 etc]# cat /sys/block/sda/queue/scheduler
noop [deadline] cfq
[root@demo1 etc]# echo cfq > /sys/block/sda/queue/scheduler
[root@demo1 etc]# cat /sys/block/sda/queue/scheduler
noop deadline [cfq]
[root@demo1 etc]#

As you can see this has changed the scheduler to cfq, and deadline is no longer selected as the default I/O scheduler. To make the change persistent you have to change the GRUB configuration. When you are using GRUB 2 the process is a bit different from legacy GRUB, which is still present on many existing Linux installations. With GRUB 2 you have to edit the defaults file located at /etc/default/grub and add the new scheduler to GRUB_CMDLINE_LINUX, which could look like the example below:

GRUB_CMDLINE_LINUX="crashkernel=auto  vconsole.font=latarcyrheb-sun16 rd.lvm.lv=ol/swap rd.lvm.lv=ol/root vconsole.keymap=us rhgb quiet"

If we, for example, want to make cfq persistent, we have to change the line into the example below by adding elevator=cfq to it:

GRUB_CMDLINE_LINUX="crashkernel=auto  vconsole.font=latarcyrheb-sun16 rd.lvm.lv=ol/swap rd.lvm.lv=ol/root vconsole.keymap=us rhgb quiet elevator=cfq"

This only places the new information in the defaults file and not yet in the grub.cfg file where it is needed during boot. To ensure it is added there you have to run grub2-mkconfig and direct the output to /boot/grub2/grub.cfg, as shown in the example below.

[root@demo1 default]# grub2-mkconfig -o /boot/grub2/grub.cfg
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-3.10.0-123.el7.x86_64
Found initrd image: /boot/initramfs-3.10.0-123.el7.x86_64.img
Found linux image: /boot/vmlinuz-3.8.13-35.3.1.el7uek.x86_64
Found initrd image: /boot/initramfs-3.8.13-35.3.1.el7uek.x86_64.img
Warning: Please don't use old title `Oracle Linux Server, with Unbreakable Enterprise Kernel 3.8.13-35.3.1.el7uek.x86_64' for GRUB_DEFAULT, use `Advanced options for Oracle Linux Server>Oracle Linux Server, with Unbreakable Enterprise Kernel 3.8.13-35.3.1.el7uek.x86_64' (for versions before 2.00) or `gnulinux-advanced-8f652ccf-3540-4549-9a5c-1d126e882d35>gnulinux-3.8.13-35.3.1.el7uek.x86_64-advanced-8f652ccf-3540-4549-9a5c-1d126e882d35' (for 2.00 or later)
Found linux image: /boot/vmlinuz-0-rescue-782e1cbce43c4c9d8829bd4addd5f09d
Found initrd image: /boot/initramfs-0-rescue-782e1cbce43c4c9d8829bd4addd5f09d.img
[root@demo1 default]#

If we now reboot the machine and check again what the scheduler is that is applied on sda we can see that cfq has been selected as the default scheduler for this device:

[root@demo1 ~]# cat /sys/block/sda/queue/scheduler
noop deadline [cfq]
[root@demo1 ~]#

When working with a legacy GRUB bootloader you can directly change the scheduler in /etc/grub.conf; however, with the introduction of GRUB 2 this is no longer an option and you need to take the above-mentioned steps to change the I/O scheduler in Oracle Linux.

Monday, September 21, 2015

Oracle Enterprise Manager query table space sizes

Oracle Enterprise Manager provides you the ideal solution to manage a large number of targets. All information about the targets, for example Oracle databases, is stored in the Oracle Enterprise Manager Repository database. What makes it interesting is that you can query the database with SQL and get information out of it quickly, showing you exactly what you need.

In the example below we query the total tablespace size per Oracle database. The query provides a list of all databases that are registered as a target in OEM, in combination with the name of the server each one is running on and the total size of its tablespaces.


The code is also available on github where you can find a larger collection of scripts. This scripting repository will be updated continuously so everyone is able to make use of the scripts.

Dual node SSH tunnel with putty

When connecting to a remote Linux server over SSH you have the option to create a tunnel between the remote server and your local workstation. This can be very handy when you, for example, need to map a port on the remote server to a localhost port on your workstation. If the only allowed connection to the server is SSH and the database listens on port 1521 on that server, you will not be able to connect to port 1521 remotely. You can, however, create a tunnel within the SSH session on port 22 so that you can connect to localhost:1521 on your workstation and communicate with the database via the SSH tunnel.
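
On a Linux workstation this single-hop case could be sketched as follows (user and host names are placeholders):

# forward local port 1521 to port 1521 on the remote server, inside the SSH session
ssh -L 1521:localhost:1521 user@remote-server
# the database is now reachable on the workstation as localhost:1521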

The above use is quite straightforward: on a Linux workstation creating the tunnel is a single command, and on Windows with PuTTY it is easily done by creating a tunnel profile. It gets more interesting when you have the configuration shown below.

In this situation you have a Windows laptop which is only able to connect to the “jump server” via SSH. When you want to use Oracle SQL Developer to connect to the database on the database server, you will not be able to connect directly on port 1521 or create a direct tunnel between your workstation and that port.

You will need to create a tunnel between your workstation and the “jump server” and from the “jump server” to the database server. This is in essence a double hop tunnel. To arrange this take the following steps:

  • Configure a PuTTY tunnel on your Windows workstation where the source port is 45678 and the destination is localhost:45678 (see screenshot below).
  • Connect with this configuration from your workstation to the “jump server”.
  • Execute the following command in a shell on the “jump server”: ssh -L 45678:database-server:1521 root@database-server
  • While on your workstation, connect Oracle SQL Developer to localhost:45678

This should enable you to use Oracle SQL Developer locally by making use of a dual hop SSH tunnel to the database server via the “jump server”.
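
For reference, the equivalent double hop from a Linux workstation could be sketched entirely with OpenSSH; the host names and the local port 45678 are simply the examples used above:

# first hop: forward workstation port 45678 to port 45678 on the jump server
ssh -L 45678:localhost:45678 user@jump-server
# second hop, executed on the jump server: forward its port 45678 to port 1521 on the database server
ssh -L 45678:database-server:1521 root@database-server
# Oracle SQL Developer on the workstation can now connect to localhost:45678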

Thursday, September 10, 2015

Exadata check IB cables

One of the things that helps Exadata perform at the speed it does is the fact that the connections between the compute nodes and the storage nodes are based upon InfiniBand. In some cases other, external components are also connected to the Exadata by making use of InfiniBand. InfiniBand is an integrated and vital part of Exadata. The presentation below gives a quick introduction to the InfiniBand cabling of a full rack Exadata and how you can connect other Oracle engineered systems to an Exadata InfiniBand fabric.

In normal situations all cables should be present in an Exadata; however, in some cases a cable might have been unplugged for one reason or another. As datacenters are often not near the location where the engineers are, it can be handy to be able to check the state of the cables from the command line without being physically present in the datacenter. The bash script below enables you to check the state of the cables on both the compute and the storage servers.

for ib_cable in `ls /sys/class/net | grep ^ib`; do
  # print the interface name followed by its carrier state (1 = cable detected, 0 = no cable)
  printf "$ib_cable: "; cat /sys/class/net/$ib_cable/carrier;
done

The output tells you, per InfiniBand interface, whether a cable is present: a 1 indicates that a cable is detected, a 0 indicates that no cable is detected.

Friday, September 04, 2015

Oracle EM12C query virtual machines

Oracle Enterprise Manager, partially in combination with Oracle VM Manager, is able to monitor and manage your Oracle VM landscape and the virtual machines that are deployed on it. One of the advantages of Oracle Enterprise Manager is that all the information associated with known targets is stored in a database. This means that with some simple SQL statements you can query that information; in the sample code below we run a simple query against the Oracle Enterprise Manager repository database to get information about the virtual machines known to Oracle Enterprise Manager, in combination with where they are located in the cluster.

This query can be very handy in case you need to make a quick impact analysis and need to know in which datacenter, in which pool and on which physical server specific virtual machines are deployed.

SELECT
   v_ovm_vm.ovm_display_name         AS VM_NAME,
   v_ovm_vm.kernel_ver               AS VM_KERNEL,
   v_ovm_serverpool.ovm_display_name AS VMSERVER_POOL,
   v_ovm_zone.ovm_display_name       AS VMSERVER_ZONE,
   v_ovm_server.ovm_display_name     AS VMSERVER_NAME -- reconstructed: the original post appears to have dropped the column holding the VM server name
FROM
   MGMT$VT_VM_SW_CFG v_ovm_vm,
   MGMT$VT_VSP_CONFIG v_ovm_serverpool,
   MGMT$VT_ZONE_CONFIG v_ovm_zone,
   MGMT$VT_VS_SW_CFG v_ovm_server
WHERE
   v_ovm_vm.vsp_uuid = v_ovm_serverpool.vsp_uuid
   AND v_ovm_serverpool.zone_uuid = v_ovm_zone.zone_uuid
   AND v_ovm_vm.VS_UUID = v_ovm_server.vs_uuid
ORDER BY 3,4,5,1

The code is also available on github where you can find a larger collection of scripts. This scripting repository will be updated continuously so everyone is able to make use of the scripts.

Monday, August 31, 2015

Oracle Linux local firewalls -- firewalld

A question that comes to me quite often is whether local firewalls should be used on Linux. Often the question comes from operating system administrators who do not “like” to maintain all the firewalls locally and would prefer the network team to take care of this at the network level. The question is also often posed by DBAs and developers who need to access the systems frequently and are involved in changes to them: every time they need a port opened or a new route between machines added, they have to go through a change management process because of the local firewalls, and they would rather see them not implemented.

As I work on Linux (and other operating systems) regularly and in changing roles, I do sympathize with these statements and understand the questions and reasons behind them. From my role as a security consultant and architect I do, however, not agree with the statement that security should be managed solely by the network team and that local firewalls are nothing more than an annoyance.

A recent post on a mailing list around a different subject gave me the opportunity to again come back to my topic of defending the use of local firewalls.

I am particularly interested in confirming that low-risk servers can’t be used as a stepping stone to attack a high-risk server, or as a means of unauthorised data egress.

The above quote is out of context due to sharing restrictions; however, the full mail started a discussion on the topic of local firewalls, and the quote alone already provides some clues as to why local firewalls are important.

Take the architecture deployment below for a “standard” implementation of an Oracle based landscape; the landscape uses Oracle Linux and hosts Oracle software.

Most implementations are based upon the above principle: they have a DMZ which hosts the external facing services, and those machines connect to the back-end of the application, which in our case is an Oracle RAC database implementation in combination with an Oracle NoSQL key-value store.

In principle nothing is wrong with this picture: if all is done correctly the external firewall will only allow traffic on the ports that are needed and block everything else, and the same applies to the firewall between the DMZ and the back-end systems. However, in case an attacker is able to gain access to one of the hosts in the DMZ, the external firewall does not protect you against communication between the compromised host and the other hosts in that specific VLAN. The attacker will be able to move between those hosts far more easily because there is no firewall between them.

The implementation below shows a more rigid model in which all hosts have a local firewall: in case one of the hosts is compromised, the attacker is largely confined to that host and the options to connect to other hosts are extremely limited.

Due to this the administrators and the security team have much more time to fight back against the attack and the overall security of the landscape is significantly raised.

Many people opt against implementing local firewall rules because it introduces management overhead. In my personal opinion the use of local firewalls should, however, be promoted as the standard, with not implementing them being the exception rather than the other way around. Currently the default is to not implement them, and only in some exceptional cases do customers decide to do so.

Now, if you implement local firewalls on Linux the most common solution to look for is iptables. However, as of Oracle Linux 7 the default firewall is no longer iptables; a firewalld-based firewall is used instead. Firewalld builds on the well-known netfilter framework, the packet filtering framework inside the Linux 2.4.x and later kernel series, which consists of a set of in-memory tables holding the rules that the kernel uses to control network packet filtering.

Oracle states the following about firewalld-based firewalls within the OL7 documentation:

The firewalld-based firewall has the following advantages over an iptables-based firewall:

  1. Unlike the iptables and ip6tables commands, using firewall-cmd does not restart the firewall and disrupt established TCP connections.
  2. firewalld supports dynamic zones, which allow you to implement different sets of firewall rules for systems such as laptops that can connect to networks with different levels of trust. You are unlikely to use this feature with server systems.
  3. firewalld supports D-Bus for better integration with services that depend on firewall configuration.

Especially the point about dynamic zones is very interesting if you need to maintain a large set of firewalls. You can define zones once, ship them with all your installations, and apply them on the systems where they are needed. For example, a default installation could contain a set of rules (zones) for a number of situations in your enterprise footprint, and you activate them where appropriate.

To configure the settings of firewalld you can make use of a GUI by calling firewall-config, or you can use the CLI by calling firewall-cmd.
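
As a minimal sketch of the CLI route (the zone name and the port are only examples; 1521 is the listener port used elsewhere on this blog):

# show the active zones and everything currently allowed in the public zone
firewall-cmd --get-active-zones
firewall-cmd --zone=public --list-all
# permanently allow the Oracle listener port in the public zone and reload the rules
firewall-cmd --zone=public --add-port=1521/tcp --permanent
firewall-cmd --reload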

Monday, August 17, 2015

Joining a startup

Ever thought about working for a startup, and ever decided not to do it because you thought it was financially a gamble and a risk too big to take? You might have done the right thing: if you are not able to take a financial risk and you have a family to support, then joining a startup might not be the best choice for you. Especially considering that 90% of startups fail, and a lot of them fail before they make a profit or are even able to pay a reasonable amount to the people who work for them.

However, for those who do have the option to take a financial risk and/or have an entrepreneurial way of thinking, and who love the drive of people working at a startup, the upside can be big. Some startups become well-running businesses, and a very small part of the remaining 10% will become very big.

If you join a startup from the beginning and it becomes big, very big, the value per employee can skyrocket. And a large part of those companies do recognise the endless efforts the employees have put in to make this happen.

As you can see in the above bubble chart, some companies have a very high value per employee. So, yes, working for a startup can be a risk. However, if it becomes a success it can become a very big success with a big financial gain. You should however never join a startup with this as a goal, join a startup for the fun and for the mindset that lives in such a company.