Saturday, February 24, 2018

Oracle JET - use clean JSON array data

Oracle JET can be used to build modern web-based applications. As part of the Oracle JET project, Oracle has created a set of examples and cookbooks which can be used to learn from and build upon. In general the examples work perfectly well; however, in some places optimizations can be made. One of them is the way data is fed into the examples for the graphs. Building on an example for the bubble graph, we have the below data set (in its original form):

The above example works correctly. As you can see it is a JSON-like structure; however, if I run the above through JSONLint it will report that it is not a valid JSON structure.

If we change the above into the below, we have a valid JSON structure. Using a valid JSON structure is the better solution and will make things easier when you use a REST API to get the data instead of using static data.

As the functionality is the same, it is best to ensure we use a valid JSON structure.
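Since the original data sets were shown as screenshots, the snippet below is a hypothetical reconstruction of the difference. A JavaScript object literal with unquoted keys is accepted by the browser but fails JSON validation, while the quoted-key variant passes:

```shell
# Hypothetical bubble graph data points; names and values are illustrative only.
printf '%s' '[{id: 0, x: 15, y: 25, z: 5}]' > /tmp/bubble_invalid.json
printf '%s' '[{"id": 0, "x": 15, "y": 25, "z": 5}]' > /tmp/bubble_valid.json

# A JavaScript-style object literal with unquoted keys is not valid JSON:
python3 -m json.tool /tmp/bubble_invalid.json >/dev/null 2>&1 || echo "invalid JSON"

# The quoted-key variant parses cleanly:
python3 -m json.tool /tmp/bubble_valid.json >/dev/null 2>&1 && echo "valid JSON"
```

The same check works on any static data file you plan to later replace with a REST API response.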

Thursday, February 15, 2018

Oracle Linux - Nginx sendfile issues

Nginx is currently a very popular HTTP server and might take over the top spot currently held by Apache HTTPD. For good reasons Nginx is becoming very popular: it is blazing fast and is built without the legacy Apache carries, designed around the modern-day requirements for an HTTP server. One of the things that contributes to the speed of Nginx is the sendfile option.

Sendfile is enabled by default, and if you run a production server that serves static files to end users you certainly want it enabled.

Nginx's initial fame came from how well it serves static files. This has a lot to do with the combination of sendfile, tcp_nodelay and tcp_nopush in nginx.conf. The sendfile option in Nginx enables the use of sendfile for everything related to... sending files.
sendfile transfers data from one file descriptor to another directly in kernel space, which saves a lot of resources:
- sendfile is a syscall, which means execution happens inside kernel space, hence no costly context switching;
- sendfile replaces the combination of a read and a write;
- in this case, sendfile allows zero copy, meaning the kernel buffer is written directly from the block device memory through DMA.
Even though sendfile is great on a production server, you can run into issues during development. In one of my development setups I run Oracle Linux 7 in a Vagrant box on my MacBook to host an instance of Nginx. I use the default /vagrant mountpoint inside the Oracle Linux Vagrant box, backed by the filesystem of my MacBook, as the location from which to serve HTML files.

The main reason I like this setup is that I can edit all files directly on my MacBook and have Nginx serve them for testing. The issue is that Nginx does not always notice that I changed an HTML file and keeps serving the old version instead of the changed one.

As it turns out, the issue lies in the way sendfile buffers files and checks for changes. Because it is not the Oracle Linux operating system that changes the file but the OS of my MacBook, the change is not always picked up correctly. This causes a lot of issues while making interactive changes to HTML code.

The solution for this issue is to disable sendfile in Nginx. You can do so by adding the below line to your Nginx config file:

sendfile  off;

After this is done, your issue should be resolved and Nginx should start serving you updated content instead of buffered content.
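As a sketch of scripting the change, the below flips sendfile from on to off with sed; the config content and path here are a stand-in for your real /etc/nginx/nginx.conf:

```shell
# Simulate a minimal nginx.conf and flip sendfile from on to off.
cat > /tmp/nginx.conf <<'EOF'
http {
    sendfile      on;
    tcp_nopush    on;
}
EOF
sed -i 's/sendfile\([[:space:]]*\)on;/sendfile\1off;/' /tmp/nginx.conf
grep sendfile /tmp/nginx.conf
```

After editing the real configuration, reload Nginx (for example with nginx -s reload) for the change to take effect.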

Wednesday, February 14, 2018

Oracle to expand Dutch data centre ‘significantly’

As reported by dutchnews.nl: Oracle plans to expand its Dutch data centre ‘significantly’ to meet demand for its integrated cloud services, the Financieele Dagblad newspaper said on Tuesday. Financial details were not disclosed but data centres generally cost several hundred million euros, the paper said.

This is Oracle’s second large investment in the Netherlands in a short time. Two years ago the company set up a new sales office in Amsterdam to cover Scandinavia, the Benelux and Germany, marketing Oracle’s cloud services to companies and institutions. The office has a payroll of 450 people, of whom 75% come from abroad.

The Netherlands is popular as a data storage centre. At the end of 2016, Google opened a large new €600m data storage centre in Eemshaven. Microsoft is planning to spend €2bn on a new centre in Wieringermeer while Equinix of the US is to open a new €160m centre in Amsterdam’s  Science Park, near the Oracle facility.

Oracle Scaleup Program to support startups

Whenever you are working on a startup you know you can use every bit of help you can get. Be it "simple" money to keep going, mentoring from someone in the industry or a veteran in doing business, or help in other ways. Other ways can be people and companies who believe in your startup and support you by providing office space, providing equipment or helping to get that first customer to sign up with your startup.

If you have ever worked at a startup you know that the first period of your startup is crucial. And even if your product or service is brilliant, getting it from an idea to something that can be delivered is hard. People sometimes have a romantic idea of starting a startup; in reality it is very (very, very) hard work where you are happy with every bit of support you can get.

Oracle is expanding its support for the startup community and recently announced this (also) via Twitter with the below tweet:
Are you part of a #startup? Then, you don't want to miss this news: @Oracle is expanding its global startup ecosystem to reach more #entrepreneurs worldwide. Say hello to the new Virtual Global #Scaleup Program! @OracleSCA http://ora.cl/fO3jS 
In effect, with the Scaleup program Oracle supports startups. Oracle announced the expansion of its global startup ecosystem in an effort to increase the impact and support for the wider startup community, reach more entrepreneurs worldwide, and drive cloud adoption and innovation. The expansion includes the launch of a new virtual-style, non-residential global program, named Oracle Scaleup Ecosystem, as well as the addition of Austin to the residential Oracle Startup Cloud Accelerator program. The addition of Austin brings the residential program to North America and expands the accelerator’s reach to nine total global locations.

If you are a startup, getting help from Oracle might be vital in becoming a success story. And as an entrepreneur you will recognize that all support is welcome in the vital early stage of your startup. Reaching out to Oracle and the Scaleup program might be a good idea.

Oracle PaaS Middleware update 36 February 2018

The latest update on Oracle PaaS middleware, from February 2018, is available online. Thanks to Juergen Kress and the Oracle PaaS Community.

The latest update is shared in the form of a YouTube video blog as usual and should be of interest to you if you are involved or working with Oracle PaaS components and/or Oracle middleware based solutions in general.

Monday, February 05, 2018

Oracle JET - Interactive Force Directed Layout

When developing a user interface for modern enterprise applications, one of the challenging tasks is to unlock the data within the enterprise and provide it to end users in a usable manner. Enterprises commonly hold vast amounts of data that are so compartmentalized that users are unable to access them, and in some cases the data is presented in ways that make it hard for users to grasp the bigger picture.

When developing modern enterprise applications, one of the tasks that should be undertaken is ensuring data is not only accessible but also presented in a way that makes sense to users: unlocking data so that it is easy to understand and easy and intuitive to work with.

When you develop your enterprise application UI based upon Oracle JET you can extend Oracle JET with all kinds of JavaScript- and HTML5-based options. In the below example we showcase an Interactive Force Directed Layout to represent relations between data points in an interactive manner.



The above is based upon GoJS in combination with Oracle JET. GoJS is a feature-rich JavaScript library for implementing custom interactive diagrams and complex visualizations across modern web browsers and platforms. GoJS makes constructing JavaScript diagrams of complex nodes, links, and groups easy with customizable templates and layouts.


As you can see from the above screenshot, the demo application is built upon an evaluation version of GoJS. If you want to use GoJS commercially you will have to purchase a license; however, a version is available to test the functionality.

To enable the above you will have to download go.js from GitHub and include it in the Oracle JET structure. In our case I included the go.js library under js/libs/custom.

To get the above example running the below code can be used as part of the Oracle JET html code:

To ensure the script is loaded you will have to make sure the body tag contains an onload="init()" statement and that you have a div element with id="myDiagramDiv" in your page.

If we look at the script used in the example above you will notice that the data points are hard-coded into the script. In a real-world situation this would not be preferable and you would use a method to load the data from a REST API.

Sunday, February 04, 2018

Oracle JET - Basic control with appController.js

When you start to develop your first application with Oracle JET, the first thing you want to understand is how to control parts of your application and influence some of the basics provided by the template. The first place to start looking is appController.js, which is located at js/appController.js. The appController script is the simple control script which can be used to influence most parts of the application.

What is provided in the base template (shown below) is a starting point for you to develop your own code. As you can see, some hard-coded values are used in the examples. In a real-world application it would not make sense to have any (or most) of these hard-coded; you would rather fetch them from a REST API source.

If we look, for example, at line 19 in the above example:
self.appName = ko.observable("My Demo Application");

This is a variable which is referred to in HTML pages within the Oracle JET application. This specific example is used to show the name of the application, as you can see in the screenshot below. The same goes for the email address, which is controlled by self.userLogin in the appController.js script.


If you want to understand how to call and include the values from appController.js you can refer to the basic examples provided by Oracle JET. You will see that there are data bindings included in the HTML code like for example the below:

The data-bind="text: appName" part will ensure that the value for appName from appController.js is used in this case.

As stated, there is no real good use for this model in real-world applications, or at least the number of cases is limited; in reality you will fetch most data from a REST API. However, when starting with Oracle JET it is good to play around with it a bit to make sure you understand the basics and how to apply them, as it will help you a lot when you develop your own custom code and extensions for appController.js at a later stage.

Thursday, January 25, 2018

Oracle Weblogic - repair URL format in management REST messages

Whenever you use the REST management API for Oracle WebLogic and start to look into the JSON responses provided by WebLogic, you will find that some (if not all) of the URLs in the JSON response are constructed in a manner where the slashes have been escaped. This is done, most likely, to ensure that nothing breaks when the response is consumed by some other application. However, if you expect clean URLs this is a bit frustrating.

The example below showcases part of the original JSON message as it is presented by the WebLogic REST API.

[vagrant@docker ~]$ curl -s --user weblogic -H "Accept: application/json" http://192.168.56.50:7001/management/weblogic
Enter host password for user 'weblogic':
{
    "links": [
        {
            "rel": "self",
            "href": "http:\/\/192.168.56.50:7001\/management\/weblogic"
        },
        {
            "rel": "canonical",
            "href": "http:\/\/192.168.56.50:7001\/management\/weblogic"
        },
        {
            "rel": "current",
            "href": "http:\/\/192.168.56.50:7001\/management\/weblogic\/12.2.1.3.0"
        }
    ],
    "items": [


If you are using Oracle Linux and use curl to get the content, you can pipe the result into sed to remove the unwanted escape characters from the URLs in the response.

For this you pipe the result into the below sed command, which rewrites every escaped slash (\/) into a plain slash, to get the expected output:

| sed 's|\\/|/|g'

This will give you the below clean output as an example:
[vagrant@docker ~]$ curl -s --user weblogic -H "Accept: application/json" http://192.168.56.50:7001/management/weblogic | sed 's|\\/|/|g'
Enter host password for user 'weblogic':
{
    "links": [
        {
            "rel": "self",
            "href": "http://192.168.56.50:7001/management/weblogic"
        },
        {
            "rel": "canonical",
            "href": "http://192.168.56.50:7001/management/weblogic"
        },
        {
            "rel": "current",
            "href": "http://192.168.56.50:7001/management/weblogic/12.2.1.3.0"
        }
    ],
    "items": [

As you can see this is a much cleaner way of looking at the response and much easier to understand and handle from a coding point of view.
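The substitution can be exercised in isolation on a one-line sample (the IP and path are taken from the example above); matching the escaped slash explicitly keeps the commas and the rest of the JSON intact:

```shell
# Sample line in the escaped form the WebLogic REST API returns;
# the sed rewrites every "\/" into a plain "/".
printf '%s\n' '{"rel": "self", "href": "http:\/\/192.168.56.50:7001\/management\/weblogic"}' \
  | sed 's|\\/|/|g'
```

Note that a JSON parser such as jq would unescape these sequences automatically; the sed approach is only needed when you handle the response as plain text.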

Oracle Weblogic - RESTful Management Services

When using Oracle WebLogic there are a couple of ways you can manage and monitor your deployment. In its most basic form a WebLogic deployment comes with a management service providing a web console for most monitoring and management tasks. Additionally, you can do everything from the Oracle Linux command line, and you have the option to hook things up to an Oracle Enterprise Manager setup for a more consolidated way of management and monitoring.

Even though all the options mentioned above are good and have their pros and cons, another option is available which, for some reason, is not that well known: you can also use the RESTful Management Services of Oracle WebLogic 12c.

In some versions the REST API is enabled and in some it is not. To make sure it is enabled you will have to check the general configuration of your domain. Under the advanced section you will find the "Enable RESTful Management Services" option, which you have to ensure is activated to be able to make use of the API.


The REST API services can now be accessed via http://{IP}:{port}/management/. Having the option to administer and monitor your WebLogic instance via a REST API greatly increases the options for including it in a more automated fashion.

Whenever you are looking into automation and custom-building management solutions for IT footprints that include WebLogic, the REST APIs are a must-look-into area.

Wednesday, January 24, 2018

Oracle Linux - installing WebLogic font error

When installing Oracle WebLogic Server on Oracle Linux 7 one would expect an easy installation without any issues along the way. However, you might run into a strange error for which the root cause is not clear at first. Installing the combination of WebLogic Server 12.2.1.3 on an Oracle Linux 4.1.12-61.1.28.el7uek.x86_64 machine in combination with Java 9.0.4 caused the below strange issue as soon as the universal installer was about to start:



After some research it became clear that the issue is caused by something as simple as missing fonts. In case you encounter this issue, the solution is to create a file /etc/fonts/local.conf and ensure the below content is in the file.


This should ensure your installer is working fine. 

Monday, January 15, 2018

Oracle Linux - convert XML to JSON

For people who are not that big a fan of XML files and still need to work with XML structures every now and then when they get data from another system, the xml2json solution might be a good thing to look into. The xml2json utility is an open-source utility written in Python which will convert XML data into JSON data, as well as the other way around. Using this on the Oracle Linux command line when you write bash scripts can save a lot of time, especially if you use xml2json in combination with jq.

As an example we take a sample XML file we found on the Microsoft website, and for convenience I created a gist from it so you can more easily grab it from the command line. The XML contains a list of books and the details of the books.

If we now want to do something with it in JSON format, we can use the xml2json command as shown below to get a valid JSON structure we can work with.

./xml2json.py -t xml2json -o test1.json test1.xml --strip_text

As we now have a JSON structure we can also use jq on it. You can find jq in the Oracle YUM repository. The xml2json utility is however not included in the Oracle YUM repository at this moment, so you will have to grab it from GitHub, where you will find the xml2json.py file which you need to place on your Oracle Linux system to be able to use it.
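The exact JSON shape depends on xml2json's mapping; assuming the Microsoft books sample converts to a nested catalog/book structure along the lines below, you can pick fields out of it from a bash script (python3 is used here in place of jq so the sketch has no extra dependencies):

```shell
# Hypothetical converted output for the first book of the sample catalog.
cat > /tmp/books.json <<'EOF'
{"catalog": {"book": [{"author": "Gambardella, Matthew", "title": "XML Developer's Guide"}]}}
EOF
python3 -c 'import json; d = json.load(open("/tmp/books.json")); print(d["catalog"]["book"][0]["title"])'
```

With jq installed the same lookup would be jq -r '.catalog.book[0].title' /tmp/books.json.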

Saturday, January 06, 2018

Oracle Linux - Build Raspberry Pi SD card on a mac

Some time ago the people from the Oracle Linux team took the time to build an Oracle Linux 7 version for the Raspberry Pi. The Raspberry Pi is a series of small single-board computers developed in the United Kingdom by the Raspberry Pi Foundation to promote the teaching of basic computer science in schools and in developing countries. The original model became far more popular than anticipated, selling outside its target market for uses such as robotics. It does not include peripherals (such as keyboards, mice and cases).
The operating system you use has to be placed on a Micro SD card. On a Mac the below command is useful to place the downloaded Oracle Linux 7 distribution for the Raspberry Pi on the Micro SD card:

sudo dd bs=1m if=/Users/louwersj/Downloads/rpi3-ol7.3-image.img of=/dev/disk2 conv=sync

If you run into the below error, you most likely still have the SD card mounted. You will have to unmount it (via the Disk Utility app) and retry the command. Do note that the command can take some time to complete.

dd: /dev/disk2: Resource busy

A bit of care is needed when executing the command. If your Micro SD card is NOT mounted on /dev/disk2 you might damage another existing disk. In other words, you need to check whether /dev/disk2 is indeed the SD card in your case. On a Mac you can use the below command to check your disks:

diskutil list

When the dd command is finished, place the SD card in your Raspberry Pi and start it; you should end up with a running Oracle Linux 7 operating system on your Raspberry Pi.
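The conv=sync option used in the dd command above pads every input block to the full block size. Its effect can be illustrated on a scratch file instead of a real device (sizes here are arbitrary):

```shell
# Write 3000 bytes, then copy with a 1024-byte block size and conv=sync;
# the last partial block is padded, so the output grows to 3 * 1024 = 3072 bytes.
head -c 3000 /dev/zero > /tmp/in.img
dd if=/tmp/in.img of=/tmp/out.img bs=1024 conv=sync 2>/dev/null
wc -c < /tmp/out.img
```

This padding is harmless when writing an OS image, but it explains why the written card can be slightly larger than the source image.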

Thursday, January 04, 2018

Oracle Dev – making Docker containers dynamic

When adopting Docker containers as a way to deploy applications and functionality in your landscape, one of the tricks is to make sure you do not end up with too many different types of containers. Even though it is relatively simple to build a container and bake all configuration into it, this might cause issues at a later stage: all container images need to be maintained and have some level of lifecycle management applied to them.

For this reason it is better to limit the number of containers and make sure you can control their behaviour dynamically. This may be a bit more work when developing the container image and requires some thinking about how to dynamically configure a container based upon a limited set of input parameters.

The reason you want to limit the number of parameters you pass a container during startup is that passing many parameters is relatively inflexible and puts an additional burden on the teams responsible for keeping the containers up and running in the right manner.

A better way is to store configuration data externally and have the container collect the configuration data based upon one (or max two) parameters. This can be for example the role of a container or the environment name where it will be deployed.

As an example: if we build a container for Oracle ORDS which connects on one side to an Oracle database and on the other side provides a REST API interface to consumers, we can make use of this principle. You want to build an ORDS Docker container only once and inform the container with a parameter which configuration details it needs to gather to start in the right role and manner within the landscape.


We have been building Oracle ORDS containers based upon Oracle Linux 7 Slim; as part of the script that starts ORDS in the container we have included parts that connect to a Consul key/value store and collect the data needed to dynamically build the configuration files ORDS uses to start.

As an example, our ORDS container already knows it is an ORDS container; at startup we provide the input parameter that informs the container that the application name is “app0”. Based upon this knowledge the scripting is able to fetch the needed configuration data from the Consul REST API.

In Consul we have provided all the needed KV pairs for the script to write the ORDS configuration. If we, for example, want the database hostname an ORDS “app0” container needs to connect to, we execute the below curl command to fetch the data:

curl -s http://192.168.56.52:8500/v1/kv/ords/app0/params/db.hostname | jq '.[].Value' | tr -d '"' | base64 --decode

The reason we use jq in the above example is to fetch "Value" from the JSON message returned by Consul. The full JSON message we get returned looks as follows:

[
  {
    "LockIndex": 0,
    "Key": "ords/app0/params/db.hostname",
    "Flags": 0,
    "Value": "MTkyLjE2OC41Ni4xMDE=",
    "CreateIndex": 8026,
    "ModifyIndex": 8047
  }
]

In our case we are only interested in "Value"; by adding jq '.[].Value' to the command we get only the content of "Value" returned. However, the value is wrapped in double quotes and is a base64 encoded string. If we want to decode it we need to feed it into the base64 command, which will not work while it is still wrapped in double quotes. This is the reason we first pipe the result through the tr command to strip off the double quotes. The result is a clean base64 encoded string which we can decode using the base64 --decode command.
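The decode part of that pipeline can be reproduced locally with the sample value from the Consul reply above (python3 stands in for jq here so the sketch runs without Consul or jq installed):

```shell
# Extract "Value" from the sample reply and base64-decode it;
# "MTkyLjE2OC41Ni4xMDE=" decodes to the database host IP.
printf '%s' '[{"LockIndex": 0, "Key": "ords/app0/params/db.hostname", "Value": "MTkyLjE2OC41Ni4xMDE="}]' \
  | python3 -c 'import json, sys; print(json.load(sys.stdin)[0]["Value"])' \
  | base64 --decode
```

This prints 192.168.56.101, the hostname the container startup script would write into the ORDS configuration.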

Placing Consul in the center
By placing Consul at the center of your deployment you add a KV store which can be used in combination with your Docker containers to play a vital role in how containers are started. By ensuring you have a way of controlling how Consul is positioned per environment (think development, test, acceptance and production-like environments) you can influence extremely well how containers behave in relation to the environment they run in.

Building the right and usable structure in the Consul KV store is vital for achieving the level of flexibility you need in a dynamic landscape.


Spending the right amount of time thinking about a KV structure that can be used in every environment, and about how containers will fetch data from a central Consul KV store in your landscape, will require some time and effort; however, it will provide you with a lot of benefits in the longer run when operating a wider container-based footprint.


Sunday, December 31, 2017

Oracle Linux - start Apache Kafka as service

In a previous post I showcased how to run Apache Kafka on Oracle Linux. That was intended as an example for testing purposes; the downside was that you needed to start ZooKeeper and Kafka manually. Adding them to the startup scripting of your Oracle Linux system makes sense; not only in a test environment but (especially) in a production environment you want Kafka started when you boot your machine. The below code snippet can be used for this.

Remember that, after you place this in /etc/init.d, you have to use chkconfig to add it as a service and use the service command to start it for testing.
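The original snippet was shared separately; a minimal sketch of such an init script (the /opt/kafka path and service layout are assumptions based on the related install post) could look like the below. It is written to a temporary file and only syntax-checked here:

```shell
# Sketch of an /etc/init.d wrapper for ZooKeeper and Kafka; KAFKA_HOME is an assumption.
cat > /tmp/kafka.init <<'EOF'
#!/bin/bash
# chkconfig: 2345 80 20
# description: Apache Kafka broker (and bundled ZooKeeper)
KAFKA_HOME=/opt/kafka
case "$1" in
  start)
    "$KAFKA_HOME/bin/zookeeper-server-start.sh" -daemon "$KAFKA_HOME/config/zookeeper.properties"
    "$KAFKA_HOME/bin/kafka-server-start.sh" -daemon "$KAFKA_HOME/config/server.properties"
    ;;
  stop)
    "$KAFKA_HOME/bin/kafka-server-stop.sh"
    "$KAFKA_HOME/bin/zookeeper-server-stop.sh"
    ;;
  *)
    echo "Usage: $0 {start|stop}"
    exit 1
    ;;
esac
EOF
bash -n /tmp/kafka.init && echo "syntax OK"
```

After copying it to /etc/init.d/kafka and making it executable, chkconfig --add kafka registers it and service kafka start runs it.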

Saturday, December 30, 2017

Oracle Linux - Install Apache Kafka

Apache Kafka is an open-source stream processing platform developed by the Apache Software Foundation written in Scala and Java. The project aims to provide a unified, high-throughput, low-latency platform for handling real-time data feeds. Its storage layer is essentially a "massively scalable pub/sub message queue architected as a distributed transaction log," making it highly valuable for enterprise infrastructures to process streaming data. Additionally, Kafka connects to external systems (for data import/export) via Kafka Connect and provides Kafka Streams, a Java stream processing library.



In this blog post we will install Apache Kafka on Oracle Linux. The installation is a test setup which is not ready for production environments; however, it can very well be used to explore Apache Kafka running on Oracle Linux.

Apache Kafka is also provided as a service from the Oracle Cloud in the form of the Oracle Cloud Event Hub. This provides you a running Kafka installation that can be used directly from the start. The below video shows the highlights of this service in the Oracle Cloud.


In this example we will not use the Event Hub service from the Oracle Cloud; we will install Kafka from the ground up. This can be done on a local Oracle Linux installation or on an Oracle Linux installation in the Oracle Cloud, making use of the Oracle IaaS components.

Prepare the system for installation
In essence, the most important step you need to undertake is to ensure you have Java installed on your machine. The below steps outline how this is done on Oracle Linux.

You can install the Java OpenJDK using YUM and the standard Oracle Linux repositories.

yum install java-1.8.0-openjdk.x86_64

You should now be able to verify that Java is installed in the manner shown below as an example.

[root@localhost /]# java -version
openjdk version "1.8.0_151"
OpenJDK Runtime Environment (build 1.8.0_151-b12)
OpenJDK 64-Bit Server VM (build 25.151-b12, mixed mode)
[root@localhost /]#

This however does not set JAVA_HOME and JRE_HOME as environment variables. To do so, make sure you have the following two lines in /etc/profile.

export JAVA_HOME=/usr/lib/jvm/jre-1.8.0-openjdk
export JRE_HOME=/usr/lib/jvm/jre

After you have made the changes to this file, reload the profile by issuing a source /etc/profile command. This ensures that the JRE_HOME and JAVA_HOME environment variables are loaded correctly.

[root@localhost ~]# source /etc/profile
[root@localhost ~]#
[root@localhost ~]# env | grep jre
JRE_HOME=/usr/lib/jvm/jre
JAVA_HOME=/usr/lib/jvm/jre-1.8.0-openjdk
[root@localhost ~]#

Downloading Kafka for installation
Before following the below instructions it is good practice to check what the latest stable Apache Kafka version is and download that. In our case we download the file kafka_2.11-1.0.0.tgz, the version we want to install in our example installation.

[root@localhost /]# cd /tmp
[root@localhost tmp]# wget http://www-us.apache.org/dist/kafka/1.0.0/kafka_2.11-1.0.0.tgz
--2017-12-27 13:35:51--  http://www-us.apache.org/dist/kafka/1.0.0/kafka_2.11-1.0.0.tgz
Resolving www-us.apache.org (www-us.apache.org)... 140.211.11.105
Connecting to www-us.apache.org (www-us.apache.org)|140.211.11.105|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 49475271 (47M) [application/x-gzip]
Saving to: ‘kafka_2.11-1.0.0.tgz’

100%[======================================>] 49,475,271  2.89MB/s   in 16s 

2017-12-27 13:36:09 (3.01 MB/s) - ‘kafka_2.11-1.0.0.tgz’ saved [49475271/49475271]

[root@localhost tmp]# ls -la *.tgz
-rw-r--r--. 1 root root 49475271 Nov  1 05:39 kafka_2.11-1.0.0.tgz
[root@localhost tmp]#

You can untar the downloaded file with tar -xvf kafka_2.11-1.0.0.tgz and then move it to the location where you want to place Apache Kafka. In our case we want to place Kafka in /opt/kafka, so we undertake the below actions:

[root@localhost tmp]# mkdir /opt/kafka
[root@localhost tmp]#
[root@localhost tmp]# cd /tmp/kafka_2.11-1.0.0
[root@localhost kafka_2.11-1.0.0]# cp -r * /opt/kafka
[root@localhost kafka_2.11-1.0.0]# ls -la /opt/kafka/
total 48
drwxr-xr-x. 6 root root    83 Dec 27 13:39 .
drwxr-xr-x. 4 root root    50 Dec 27 13:39 ..
drwxr-xr-x. 3 root root  4096 Dec 27 13:39 bin
drwxr-xr-x. 2 root root  4096 Dec 27 13:39 config
drwxr-xr-x. 2 root root  4096 Dec 27 13:39 libs
-rw-r--r--. 1 root root 28824 Dec 27 13:39 LICENSE
-rw-r--r--. 1 root root   336 Dec 27 13:39 NOTICE
drwxr-xr-x. 2 root root    43 Dec 27 13:39 site-docs
[root@localhost kafka_2.11-1.0.0]#

Start Apache Kafka
The above steps should have placed Apache Kafka on your Oracle Linux system; now we have to start it and test that it works. Before we can start Kafka on our Oracle Linux system we first have to ensure ZooKeeper is up and running. To do so, execute the below command in the /opt/kafka directory.

bin/zookeeper-server-start.sh -daemon config/zookeeper.properties

Depending on the sizing of your machine you might want to change some things in the startup script for Apache Kafka. When, as in my case, you deploy Apache Kafka in an Oracle Linux test machine, you might not have as much memory allocated to the test machine as you would have on a "real" server. The below line is present in the bin/kafka-server-start.sh file and sets the memory heap size that should be used.

    export KAFKA_HEAP_OPTS="-Xmx1G -Xms1G"

In our case we changed the initial heap size to 128 MB, which is more than adequate for testing purposes but might be far too little for a production system. The below is an example of the setting as used for this test:

    export KAFKA_HEAP_OPTS="-Xmx1G -Xms128M"
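If you script your test setup, the same change can be applied with sed; it is demonstrated on a simulated copy of the line here, while the real file is bin/kafka-server-start.sh under /opt/kafka:

```shell
# Simulate the relevant line of kafka-server-start.sh and lower the initial heap.
cat > /tmp/kafka-server-start.sh <<'EOF'
    export KAFKA_HEAP_OPTS="-Xmx1G -Xms1G"
EOF
sed -i 's/-Xms1G/-Xms128M/' /tmp/kafka-server-start.sh
grep KAFKA_HEAP_OPTS /tmp/kafka-server-start.sh
```

Note that -Xms controls only the initial heap; the maximum heap (-Xmx) stays at 1G in this variant.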


This should enable you to start Apache Kafka for the first time as a test on Oracle Linux. You can start Apache Kafka from the /opt/kafka directory using the below command:

bin/kafka-server-start.sh config/server.properties

You should see a trail of messages from the startup routine and, if all has gone well, the last message should be the one shown below, indicating that Kafka is up and running.

INFO [KafkaServer id=0] started (kafka.server.KafkaServer)

Testing Kafka
As we now have Apache Kafka up and running we could (and should) test that it works as expected. Kafka comes with a number of scripts that make testing easier. The below scripts come in useful when starting to test (or debug) Apache Kafka:

bin/kafka-topics.sh
For taking actions on topics, for example creating a new topic.

bin/kafka-console-producer.sh
Used in the role of producer to send event messages.

bin/kafka-console-consumer.sh
Used in the role of consumer to receive event messages.

The first step in testing is to ensure we have a topic in Apache Kafka to publish event messages to. For this we can use the kafka-topics.sh script. We will create the topic "test" as shown below:

[vagrant@localhost kafka]$
[vagrant@localhost kafka]$ bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
Created topic "test".
[vagrant@localhost kafka]$

To ensure the topic is really available in Apache Kafka we list the available topics with the below command:

[vagrant@localhost kafka]$ bin/kafka-topics.sh --list --zookeeper localhost:2181
test
[vagrant@localhost kafka]$

Having a topic in Apache Kafka enables you to start producing messages as a producer. The below example showcases starting the kafka-console-producer.sh script. This gives you an interactive command line where you can type messages.

[vagrant@localhost kafka]$ bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
hello 1
hello 2
thisisatest
this is a test
{test 123}

[vagrant@localhost kafka]$

As the whole idea is to produce messages on one side and receive them on the other side, we also have to test the subscriber side of Apache Kafka to see if we can receive the messages on the topic "test" as a subscriber. The below command subscribes to the topic "test"; the --from-beginning option indicates that we do not want to receive only event messages created from this moment on, but all messages from the beginning (creation) of the topic (as far as they are still available in Apache Kafka).

[vagrant@localhost kafka]$ bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning
hello 1
hello 2
thisisatest
this is a test
{test 123}

As you can see the event messages we created as the producer are received by the consumer (subscriber) on the test topic.