Saturday, May 26, 2018

Oracle AI cloud - develop local Pillow applications

Oracle AI Cloud provides an out-of-the-box solution for developers who want to develop and deploy applications that make use of Pillow. Pillow is the friendly PIL fork by Alex Clark and Contributors; PIL is the Python Imaging Library by Fredrik Lundh and Contributors. Even though developing your code in the Oracle cloud makes sense in some cases, developing on your local workstation makes a lot more sense from time to time.

To start developing Pillow based applications, the easiest way is to install Pillow in a local Oracle Linux Vagrant box. Setting up a local Oracle Linux Vagrant box is relatively straightforward and has already been discussed a couple of times on this blog.

Installing Pillow
Installing Pillow can be done using pip; the below example shows how to install Python Pillow on Oracle Linux using pip.

[root@localhost site-packages]# pip install Pillow
Collecting Pillow
  Downloading (2.0MB)
    100% |████████████████████████████████| 2.0MB 190kB/s 
Installing collected packages: Pillow
Successfully installed Pillow-5.1.0
[root@localhost site-packages]#

As you can see, pip will take care of installing Pillow and gives you a ready-to-use environment to start developing. This will help you develop locally on projects you can later deploy on the Oracle AI Cloud. 
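Once pip has finished, a quick sketch like the below can verify that the local environment works. The image size, color and output path are arbitrary choices of mine, not part of any Oracle example:

```python
from PIL import Image

# Create a small solid-color RGB image, resize it, and save it as a
# thumbnail to confirm Pillow is installed and functional.
img = Image.new("RGB", (640, 480), color=(0, 102, 204))
thumb = img.resize((160, 120))
thumb.save("/tmp/thumbnail.png")
print(thumb.size)
```

If this runs without errors and produces the PNG file, the Vagrant box is ready for Pillow development.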

Saturday, May 05, 2018

Oracle Linux - register GitLab Runner

Automating the development process and including CI/CD in your development and deployment cycle is more and more common. One of the solutions you could use is GitLab CI to build automated pipelines. For companies who want (or need) to maintain a private repository and cannot use a public hosted service for storing their source code, GitLab is a very good tool. As part of GitLab you can use GitLab CI for pipeline automation.

Using GitLab CI and the GitLab Runners takes away (in part) the need to include tooling such as Jenkins in your landscape. You can instruct your GitLab Runners to execute certain tasks and run the pipeline. For this to work you need to install the runner and register it against your GitLab repository.

In our case we run the GitLab repository on an Oracle Linux 7 instance and we also have the GitLab Runner installed on a (separate) Oracle Linux 7 instance. After installation you will have to take the below steps to register your runner against the GitLab repository. This is done on the GitLab Runner instance.

[root@gitlab ~]# gitlab-runner register
Running in system-mode.                            
Please enter the gitlab-ci coordinator URL (e.g.
Please enter the gitlab-ci token for this runner:
Please enter the gitlab-ci description for this runner:
[]: runner_0  
Please enter the gitlab-ci tags for this runner (comma separated):
Whether to run untagged builds [true/false]:
[false]: true
Whether to lock the Runner to current project [true/false]:
[true]: true
Registering runner... succeeded                     runner=5gosbE5T                
Please enter the executor: docker, parallels, shell, virtualbox, kubernetes, docker-ssh, ssh, docker+machine, docker-ssh+machine:
[docker, parallels, shell, virtualbox, kubernetes, docker-ssh, ssh, docker+machine, docker-ssh+machine]: shell
Runner registered successfully. Feel free to start it, but if it's running already the config should be automatically reloaded! 
[root@gitlab ~]# 

The token you need to provide can be obtained from the GitLab repository. The below image shows the token and the location where you can obtain it.

The same page can be used to change the settings of your runners after they are deployed. 
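For scripted provisioning the same registration can be done without the interactive prompts. The below is only a sketch: the URL and token are placeholders, and the exact flags can differ between GitLab Runner versions, so check `gitlab-runner register --help` on your version first.

```shell
# Non-interactive registration sketch; URL and token are placeholders.
gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.example.local/" \
  --registration-token "REPLACE_WITH_TOKEN" \
  --description "runner_0" \
  --executor "shell" \
  --run-untagged="true" \
  --locked="true"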

Saturday, April 07, 2018

Oracle Cloud - Using AI cloud Platform to find a parking spot

One of the new and upcoming parts of the Oracle cloud is the Oracle AI Cloud platform. In effect this is a bundle of pre-installed frameworks and libraries that are tuned to run on the Oracle cloud infrastructure. One of the deployments in the Oracle AI Cloud Platform is OpenCV. When you are working with incoming visual data this might be of great interest to you. 

OpenCV (Open Source Computer Vision) is a library of programming functions mainly aimed at real-time computer vision. Officially launched in 1999, the OpenCV project was initially an Intel Research initiative to advance CPU-intensive applications, part of a series of projects including real-time ray tracing and 3D display walls.

The below image showcases the full Oracle AI Cloud platform:

Example use case
As an example use case for using OpenCV from the Oracle AI Cloud Platform, we would like to outline a theoretical case where, on a regular basis, pictures of an "old fashioned" parking space at an airport are uploaded to OpenCV. Based upon the images that are sent to OpenCV on the Oracle AI Cloud Platform, the system can detect in which part of the parking area the most open spots are and direct visitors to that area.

Even though most parking spaces have a counter of how many cars are currently on the parking lot, for a large space it can still be hard to find the area with free spots. As you would already need some level of camera security for this area, the costs for adding this feature are much lower compared to installing sensors in the ground that could detect whether a spot is taken or not.

Even though it might sound complex, detecting free parking spaces is a relatively easy task to conduct with OpenCV, and a large number of examples and algorithms are available. With relative ease you would be able to create a solution like this on the Oracle Cloud and thereby improve customer satisfaction without the need to add sensors in every possible parking location. 
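Even without the full OpenCV pipeline, the core idea can be sketched with plain array math. The below Python sketch is a deliberately simplified assumption of mine: the spot coordinates, the empty-asphalt baseline intensity and the tolerance are hypothetical values, and in a real deployment OpenCV would supply the camera frames and a proper detection algorithm.

```python
import numpy as np

# Hypothetical spot regions: (row_start, row_end, col_start, col_end)
# within a grayscale camera frame.
SPOTS = {"A1": (0, 10, 0, 10), "A2": (0, 10, 10, 20)}

def occupied_spots(gray_image, spots, empty_intensity=200.0, tolerance=40.0):
    """Classify each spot as occupied when its mean pixel intensity deviates
    from the empty-asphalt baseline by more than `tolerance`."""
    status = {}
    for name, (r0, r1, c0, c1) in spots.items():
        mean = float(gray_image[r0:r1, c0:c1].mean())
        status[name] = abs(mean - empty_intensity) > tolerance
    return status
```

Counting the `False` entries per parking area would then tell the system which area to direct visitors to.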

Tuesday, March 20, 2018

Oracle Linux - Local Vault token cache

Vault is more and more seen in modern day infrastructure deployments. HashiCorp Vault secures, stores, and tightly controls access to tokens, passwords, certificates, API keys, and other secrets in modern computing. Vault handles leasing, key revocation, key rolling, and auditing. Through a unified API, users can access an encrypted Key/Value store and network encryption-as-a-service, or generate AWS IAM/STS credentials, SQL/NoSQL databases, X.509 certificates, SSH credentials, and more.

When using Vault from HashiCorp on your Oracle Linux infrastructure you might have noticed that there is no logout option. You can authenticate yourself against Vault and from that moment on you can request all information from Vault that you need (and are entitled to see). When starting with Vault and building your scripting you might wonder how you "break" the connection again.

In effect, a connection is built every time you do a request against Vault, and authentication with a token is done based upon a local cache of that token. If you want to ensure that all tokens are removed after you have executed the steps needed against Vault, you will have to remove the token that is placed in a local cache.

In the case of Vault the local cache is a clear-text file stored in your home directory, as shown below:

[root@docker tmp]# ls -la ~/.vault-token 
-rw------- 1 root root 36 Mar 19 15:49 /root/.vault-token
[root@docker tmp]# 

Even though some improvement requests have been raised to add a logout-like function to the Vault CLI, the response from the HashiCorp developers has been that they do not intend to build this into the CLI, because removing the .vault-token file has the same effect.

In effect the Vault developers are correct in this; it has the same effect, even though a CLI option might be a more discoverable way of doing things. A reminder for everyone using Vault: when you are done, ensure that you remove the .vault-token cache file, so you are sure nobody will be able to abuse the token to gain access to information they are not entitled to see. 
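Scripted cleanup of the cache can be as simple as deleting that file. A small Python sketch (the helper name is my own; the default path is Vault's standard token cache location):

```python
import os

def clear_vault_token(cache_path=None):
    """Remove the local Vault token cache file, if present.

    Defaults to ~/.vault-token, the file the Vault CLI writes after login.
    Returns True when a cache file was removed, False when none existed.
    """
    if cache_path is None:
        cache_path = os.path.expanduser("~/.vault-token")
    try:
        os.remove(cache_path)
        return True
    except FileNotFoundError:
        return False
```

Calling this at the end of a scripted Vault session gives you the "logout" the CLI itself does not provide.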

Sunday, March 18, 2018

Oracle MySQL - test MySQL with Docker

First things first, I am totally against running any type of Docker container that will hold persistent data in any way or form. Even to the point that I like to state that mounting external storage to a container to hold the persistent data is a bad thing. Some people will disagree with me; however, in the current state of Docker I am against it. Docker should run stateless services and should in no way depend on persistent data which is directly available (in any form) in the container itself. Having stated this, this post is about running databases in a container, while databases are one of the best examples of persistent storage.

The only exception I make to the statement of not having persistent storage in a container is volatile testing environments. If you have a testing environment you intend to use for only a couple of hours, using a container to serve a database is not a bad thing at all. What you need to remember is that if your container stops, all your data is gone.

Getting started with MySQL in Docker
To get started with MySQL in a Docker container you first have to pull it from the Docker registry. You can pull the official container image from Docker as shown in the example below which is done on Oracle Linux:

[root@docker ~]# docker pull mysql
Using default tag: latest
latest: Pulling from library/mysql
2a72cbf407d6: Pull complete 
38680a9b47a8: Pull complete 
4c732aa0eb1b: Pull complete 
c5317a34eddd: Pull complete 
f92be680366c: Pull complete 
e8ecd8bec5ab: Pull complete 
2a650284a6a8: Pull complete 
5b5108d08c6d: Pull complete 
beaff1261757: Pull complete 
c1a55c6375b5: Pull complete 
8181cde51c65: Pull complete 
Digest: sha256:691c55aabb3c4e3b89b953dd2f022f7ea845e5443954767d321d5f5fa394e28c
Status: Downloaded newer image for mysql:latest
[root@docker ~]# 

Now, this should give you the latest version of the MySQL container image. You can check this with the docker images command as shown below:

[root@docker ~]# docker images | grep mysql
mysql        latest       5195076672a7        4 days ago          371MB
[root@docker ~]#

Start MySQL in Docker
To start MySQL you can use the below command as an example. As you can see this is a somewhat more extended command than you might see on the Docker page for MySQL.

docker run --name testmysql -e MYSQL_ROOT_PASSWORD=verysecret -p 3306:3306 --rm -d mysql

What I have added in the above example is a mapping of the internal port 3306 to external port 3306. If you run multiple instances of MySQL you will need to change the external port numbers. I also added --rm to ensure the container is removed, and nothing is persisted in any way or form, as soon as you stop it.

After starting the container you should be able to find it with a docker ps command:

[root@docker ~]# docker ps |grep mysql
5d8f8bac45a1        mysql        "docker-entrypoint..."   8 minutes ago     Up 8 minutes>3306/tcp  testmysql
[root@docker ~]# 
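One practical note for such a volatile test setup: MySQL inside the container needs a little time to initialize before it accepts connections. A small sketch that polls until the published port answers; the helper name and the host/port defaults are my own choices:

```python
import socket
import time

def wait_for_port(host, port, timeout=60.0):
    """Poll until a TCP port accepts connections, e.g. MySQL on the
    published port 3306 of a freshly started container."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=2):
                return True
        except OSError:
            time.sleep(1)
    return False
```

A test script would call `wait_for_port("127.0.0.1", 3306)` right after `docker run` and only then connect with a MySQL client.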

Use databases in Docker?
As already stated, and actually the reason I wrote this post, you should not run anything in a container where you need persistent storage available within the container itself. Databases are a good example of this, so based upon that statement you should not run a database in a container. Having stated that, if you can live with the fact that you might lose all your data (for example in a quick test setup), there is nothing against running a database in a container.

Just make sure you don't do it with your production data (please....).

Sunday, March 11, 2018

Oracle Linux - keep an eye on shared libs

When running large clusters of Linux servers, you tend to look at different things. When running a large number of Linux servers all dedicated to the same task or task set, you might become interested in finding out which shared libraries are used on all systems and, as a second question, which nodes every now and then use shared libraries not used by the majority of the nodes. The question why a specific node is using a shared library that no other node uses is a second question; monitoring and detecting is the first part.

You can use outlier detection on a large dataset containing a time series of libraries used by systems. For example, if you capture the data and store it in Elasticsearch, you could use Kibana and machine learning to do trend analysis and outlier detection to find out whether a specific Linux machine in your "farm" is using a library that is not in line with all the other machines.

Capturing could be done, as an example, by executing the below command:

[root@localhost tmp]# awk '/\.so/{print $6}' /proc/*/maps | sort -u
[root@localhost tmp]# 

If you have a process taking this snapshot of shared library use on a semi-regular interval, you will get a good insight into the use of shared libraries in general on your server farm. Having this in place and adding machine learning and outlier detection, you can have a system identify strange behaviour on one or more nodes. Additionally, it might help you improve the base image of your deployed operating system by identifying shared libraries that could potentially be removed or might need an upgrade. 
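The snapshot and comparison step can be sketched in a few lines of Python. This is a minimal sketch, not the Elasticsearch/Kibana pipeline itself: it collects the mapped .so paths the same way the awk one-liner does, and flags libraries a node uses that a majority baseline set does not. The function names are my own.

```python
import glob

def shared_libs(maps_glob="/proc/*/maps"):
    """Collect the set of shared-object paths currently mapped by any process,
    equivalent to: awk '/\\.so/{print $6}' /proc/*/maps | sort -u"""
    libs = set()
    for path in glob.glob(maps_glob):
        try:
            with open(path) as f:
                for line in f:
                    parts = line.split()
                    if len(parts) >= 6 and ".so" in parts[5]:
                        libs.add(parts[5])
        except OSError:
            continue  # the process may have exited while we were reading
    return libs

def outliers(node_libs, baseline):
    """Libraries a node uses that the baseline (majority) set does not."""
    return node_libs - baseline
```

Shipping the `shared_libs()` snapshot per node into Elasticsearch and running `outliers()` against the fleet-wide baseline is the core of the detection described above.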

Oracle Linux - check shared library version

As we have seen in a previous post, you can quickly see which shared library files are used by a specific executable under Oracle Linux. In our example we found the shared libraries used by ping under Oracle Linux; we extracted that information using the readelf command. Even though this provides some information, it does not tell you the exact version that is used at present. In case you do need to know the exact version, you will have to dig a bit deeper into the system.

In a previous example we used the below command on Oracle Linux to find out the shared library files for the ping command.

[root@localhost tmp]# readelf -d /bin/ping | grep 'NEEDED'
 0x0000000000000001 (NEEDED)             Shared library: []
 0x0000000000000001 (NEEDED)             Shared library: []
[root@localhost tmp]# 

If you would like a bit more information on the version that is used, you can use ldconfig. ldconfig creates the necessary links and cache to the most recent shared libraries found in the directories specified on the command line, in the file /etc/, and in the trusted directories /lib and /usr/lib (on some 64-bit architectures such as x86-64, lib and /usr/lib are the trusted directories for 32-bit libraries, while /lib64 and /usr/lib64 are used for 64-bit libraries).

This means that ldconfig can provide more information than we might want. In the below example (only the first couple of lines) we do a verbose run of ldconfig.

[root@localhost tmp]# ldconfig -v
ldconfig: /etc/ duplicate hwcap 1 nosegneg
/lib64: -> -> -> -> -> -> -> -> -> -> -> -> -> ->

Now, if we want information on a specific library we can do so by piping the output through grep, so we only see the information we want.

[root@localhost tmp]# ldconfig -v | grep
ldconfig: /etc/ duplicate hwcap 1 nosegneg ->
[root@localhost tmp]# 

As you can see from the above output, the shared library name is linked to the actual versioned file that is used (via linking) by the ping executable we used as an example.
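If you prefer to resolve the version from a script, the cache listing of `ldconfig -p` (a sibling of the verbose scan used above) can be parsed. This is a sketch with hypothetical library names and helper names of my own; the final realpath step follows the symlink to the actual versioned file:

```python
import os
import re
import subprocess

# Matches ldconfig -p cache lines like:
#   libcrypto.so.10 (libc6,x86-64) => /lib64/libcrypto.so.10
CACHE_LINE = re.compile(r"^\s*(?P<name>\S+)\s+\((?P<flags>[^)]*)\)\s+=>\s+(?P<path>\S+)$")

def parse_ldconfig_cache(text):
    """Map library soname -> on-disk path from `ldconfig -p` output."""
    mapping = {}
    for line in text.splitlines():
        m = CACHE_LINE.match(line)
        if m:
            mapping[m.group("name")] = m.group("path")
    return mapping

def library_path(name):
    """Resolve a soname to the actual versioned file behind its symlink."""
    out = subprocess.run(["ldconfig", "-p"],
                         capture_output=True, text=True).stdout
    path = parse_ldconfig_cache(out).get(name)
    # The cached path is usually a symlink; realpath yields the real file.
    return os.path.realpath(path) if path else None
```

On a live system, `library_path("libc.so.6")` (as an example) would return the versioned file the symlink points at.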

Oracle Linux - find shared libraries using readelf

A library is a file containing compiled code from various object files combined into a single file. It may contain a group of functions that are used in a particular context. For example, the 'pthread' library is used when thread-related functions are needed in a program. Shared libraries are libraries that can be linked to any program at run-time. They provide a means to use code that can be loaded anywhere in memory. Once loaded, the shared library code can be used by any number of programs. This way the size of programs (using a shared library) and their memory footprint can be kept low, as a lot of code is kept common in the form of a shared library.

In some cases you want to understand which shared libraries are used by a specific executable file. We take as an example the ping executable as it is available on most systems. To give the complete picture, we are running Oracle Linux and use the below version of ping:

[root@localhost tmp]# uname -a
Linux localhost 4.1.12-61.1.28.el6uek.x86_64 #2 SMP Thu Feb 23 20:03:53 PST 2017 x86_64 x86_64 x86_64 GNU/Linux
[root@localhost tmp]# ping -V
ping utility, iputils-sss20071127
[root@localhost tmp]# 

Now, a number of options are available to find out which shared libraries are used. In this example we use the readelf way of doing things. The readelf command displays information about one or more ELF format object files; its options control what particular information to display.

In the below example we use readelf on Oracle Linux to find out which shared library files are used by the ping command as an example.

[root@localhost tmp]# readelf -d /bin/ping

Dynamic section at offset 0x8760 contains 22 entries:
  Tag        Type                         Name/Value
 0x0000000000000001 (NEEDED)             Shared library: []
 0x0000000000000001 (NEEDED)             Shared library: []
 0x000000000000000c (INIT)               0x1d68
 0x000000000000000d (FINI)               0x6d28
 0x000000006ffffef5 (GNU_HASH)           0x260
 0x0000000000000005 (STRTAB)             0xdc0
 0x0000000000000006 (SYMTAB)             0x3d0
 0x000000000000000a (STRSZ)              1074 (bytes)
 0x000000000000000b (SYMENT)             24 (bytes)
 0x0000000000000015 (DEBUG)              0x0
 0x0000000000000003 (PLTGOT)             0x208a78
 0x0000000000000002 (PLTRELSZ)           1320 (bytes)
 0x0000000000000014 (PLTREL)             RELA
 0x0000000000000017 (JMPREL)             0x1840
 0x0000000000000007 (RELA)               0x1348
 0x0000000000000008 (RELASZ)             1272 (bytes)
 0x0000000000000009 (RELAENT)            24 (bytes)
 0x000000006ffffffe (VERNEED)            0x12c8
 0x000000006fffffff (VERNEEDNUM)         2
 0x000000006ffffff0 (VERSYM)             0x11f2
 0x000000006ffffff9 (RELACOUNT)          44
 0x0000000000000000 (NULL)               0x0
[root@localhost tmp]# 

As you can see, we have two shared library files in this case. The readelf command gave a lot more information as well; in case you do not want those extra lines, you can use a simple pipe to some commands to get cleaner output.

[root@localhost tmp]# readelf -d /bin/ping | grep 'NEEDED'
 0x0000000000000001 (NEEDED)             Shared library: []
 0x0000000000000001 (NEEDED)             Shared library: []
[root@localhost tmp]# 

The above example showcases the cleaner way to check which shared library files are used.
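The grep pipeline above can also be wrapped in a small script when you want to process the NEEDED entries further. This is a sketch that parses `readelf -d` output; the helper names are my own:

```python
import re
import subprocess

# Matches readelf -d lines such as:
#  0x0000000000000001 (NEEDED)  Shared library: [libc.so.6]
NEEDED_RE = re.compile(r"\(NEEDED\)\s+Shared library:\s+\[(?P<lib>[^\]]*)\]")

def needed_libraries(readelf_output):
    """Extract shared-library names from `readelf -d` output text."""
    return [m.group("lib") for m in NEEDED_RE.finditer(readelf_output)]

def needed_for(path):
    """Run readelf -d on an executable and return its NEEDED entries."""
    out = subprocess.run(["readelf", "-d", path],
                         capture_output=True, text=True, check=True).stdout
    return needed_libraries(out)
```

On a system with binutils installed, `needed_for("/bin/ping")` returns the same two entries the grep pipeline shows.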

Wednesday, February 28, 2018

Weblogic - log file monitoring with Elastic

When developing a central logging solution, with for example Elastic, and you are applying this to a WebLogic centric environment, you will have to understand how WebLogic logging is done in a cluster configuration. If done in the wrong manner you will flood your Elastic solution with a large number of duplicate records.

In effect, each WebLogic server generates logging; this logging is written to a local file on the operating system where the WebLogic server is running. Additionally, the log records are sent by the WebLogic server logger process to the Domain Log Broadcaster. All messages, with the exception of messages marked as debug, are sent through a filter to the domain logger on the WebLogic admin server.

The domain logger on the admin server takes the log records from all nodes in the cluster, plus the logging from the admin server itself, and pushes these into a single consolidated domain log file (after filtering). Additionally, each server keeps writing its own local server log file.

When using FileBeat from Elastic you will have to indicate which files need to be watched by FileBeat. In general there are a couple of options.

  1. Put the domain log file under watch of Elastic FileBeat and have all records written to this file sent to Elasticsearch.
  2. Put all the individual server log files under watch of Elastic FileBeat and have all records written to these files sent to Elasticsearch.

When applying option 1 you run into the issue of missing log records, as filtering is (or can be) applied in the chain of events. Additionally, if you use option 1 and the admin server has an issue and is unable to send the log entries, you will not be able to see this in Elasticsearch or Kibana.

A better option is option 2, even though this requires you to install FileBeat on multiple servers.
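Option 2 can be sketched as a FileBeat configuration fragment. This is a hypothetical example: the domain log path and Elasticsearch host are placeholders, and the exact top-level key (`filebeat.inputs` vs the older `filebeat.prospectors`) depends on your FileBeat version.

```yaml
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      # hypothetical WebLogic domain layout; adjust to your installation
      - /u01/domains/mydomain/servers/*/logs/*.log

output.elasticsearch:
  hosts: ["elastic.example.local:9200"]
```

With the wildcard covering every managed server's log directory, each node ships its own server log and no consolidated domain log is needed for central logging.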

Tuesday, February 27, 2018

Oracle Linux - digging into YUM repository XML data

Everyone using Oracle Linux will, at one point in time, have used the yum command to install additional tooling. For most, it will simply be consuming yum to get new things in or to update the distribution while connected to Oracle's public YUM repository, and everyone using it will see that new data is being fetched to ensure you have proper information on the latest versions available. For those wondering what is inside the data that is being fetched: you can simply have a curious look inside the XML files your machine is gathering.

To make things easier, yum uses XML that is stored on the yum server, and it will use this metadata whenever you request an update or want to do an installation. There is, however, a wealth of information in these files you might not be aware of.

Even though there are commands that will help you get this information out in a clean format, it is good to also have a look at the more raw form in case you want to build your own logic around it. As an example, we will be downloading the other.xml.gz file for Oracle Linux 7 from the Oracle Public YUM repository server.

If we extract the file and start exploring its content, you will see a lot of information is available. The below screenshot shows the information of one package, in our case the source package for jing-trang. We use jing-trang just as an example for no specific reason; it is a schema validation and conversion tool based on RELAX NG.

If we now start to explore the content of this specific package we have obtained from the other.xml.gz file we downloaded from the Oracle Linux YUM repository we see that a lot of information is available.

Interestingly, we see that, for example, all the changelog information is available and we can see who the developer is. Having stated that, "developer", or rather "author", is somewhat of a misleading term: it is the author of the package, which means it is not per se the author of the code. In any case, it helps you find who is behind a certain piece of code and gives you more insight into what has changed per version.

Even though this should be in the release notes, it can be an additional source of information. One reason you might be interested in the more raw form of the information is, as an example, that you want to collect and visualize insights into changes to the packages your enterprise uses (which is the case that got me looking into this).
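As a sketch of building your own logic around the raw metadata, the below Python parses changelog entries out of an other.xml file. The XML namespace is the standard repodata one; the helper names and the sample file path are my own, and real files ship gzip-compressed as other.xml.gz.

```python
import gzip
import xml.etree.ElementTree as ET

# Standard namespace used by repodata other.xml files.
NS = "{http://linux.duke.edu/metadata/other}"

def changelogs(xml_content):
    """Yield (package, author, text) tuples from other.xml content."""
    root = ET.fromstring(xml_content)
    for pkg in root.iter(NS + "package"):
        for entry in pkg.iter(NS + "changelog"):
            yield pkg.get("name"), entry.get("author"), entry.text

def changelogs_from_file(path):
    """Read a downloaded other.xml.gz and return all changelog tuples."""
    with gzip.open(path, "rb") as f:
        return list(changelogs(f.read()))
```

Feeding these tuples into Elasticsearch is one way to build the change-visualization described above.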

Oracle JET - stop building server-side rendered user interfaces

For years enterprises have been building large monolithic applications with large server-side components and large user-side components. We see a change and a move away from monolithic application design; modern day applications are developed in a microservice manner. As part of the microservice architecture, REST based APIs are becoming the standard for interaction between the different microservices within a wider service and between different services within an enterprise.

Surprisingly, user interfaces are still quite commonly developed in a more traditional way. It is not uncommon to see deployments in which an application is designed to have the full server-to-server communication based upon REST APIs, while the user interface is developed in a way that an application server connects to the backend REST APIs and generates the user interface to be presented to the end user.

The model of application server-side rendering of HTML based upon data in the backend systems was adopted from the model in which the application server would primarily connect to backend databases. When having a backend based upon APIs rather than direct database connectivity, there is no direct need to keep this old model in place.

As the above diagram shows, the traditional way is to collect data from the database and have the application server render HTML content which combines both the look and feel and the data. The below image outlines the more API and microservices driven architecture.

When using, as an example, Oracle JET, you will have your "static" JavaScript code and HTML provided to the end user from a simple webserver without any "logic". As companies move more and more to flexible and container based infrastructures, a good practice is to serve the Oracle JET code from a Docker container running nothing more than a simple NGINX webserver on an Oracle Linux Docker base image.

In effect, serving your Oracle JET code from an Oracle Linux Docker base image with NGINX provides you with a small webserver footprint that can be scaled up and down when needed.
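As a sketch of that approach, a minimal Dockerfile could look like the below. This is an assumption-laden example: the image tag, the availability of an nginx package in your configured yum repositories (for Oracle Linux typically via an EPEL channel), and the `web/` build output directory are all placeholders.

```dockerfile
# Hypothetical sketch: serve static Oracle JET build output with NGINX
# on an Oracle Linux base image.
FROM oraclelinux:7-slim
RUN yum -y install nginx && yum clean all
COPY web/ /usr/share/nginx/html/
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
```

Because the container holds only static files and NGINX, it stays stateless and can be replicated freely behind a load balancer.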

The JavaScript provided as part of the Oracle JET application will ensure that the client (the browser of the client), and not the server, requests the required information from a REST API. Meaning, instead of an application server collecting the data from the database, generating an HTML page containing the data and sending this to the workstation of the end user, the model now works differently: the client itself communicates with a REST API to collect the data when needed.

The advantage of this model is that developing the relatively simple JavaScript based Oracle JET application can be done quickly, building on REST APIs. The REST APIs can be used for this specific application and also for other applications. The level of re-use, in combination with the ease and speed of development, means that new applications and changes can be implemented and put to use much faster than when building monolithic, large application server based solutions.

Saturday, February 24, 2018

Oracle JET - use clean JSON array data

Oracle JET can be used to build modern web based applications. As part of the Oracle JET project, a set of examples and cookbooks has been created by Oracle which can be used to learn from and build upon. The examples work perfectly well in general; however, in some places some optimizations can be done. One example is how data is put into the examples for the graphs. Building on an example for the bubble graph, we have the below data set (in the original form):

The above example will work perfectly well. As you can see, it is a JSON-like structure; however, if I run the above through JSONLint it will state it is not a valid JSON structure.

If we change the above into the below, we will have a valid JSON structure. Using a valid JSON structure is a better solution and will make things easier when you use a REST API to get the data instead of using static data.
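The cookbook screenshots are not reproduced here, but the difference can be illustrated with a hypothetical bubble-graph data set of my own. JavaScript object literals with unquoted keys (e.g. `name: "Group A"`) are accepted by the browser yet rejected by JSONLint; quoting every key, as below, yields valid JSON:

```json
[
  {"name": "Group A", "items": [{"x": 15, "y": 25, "z": 5}]},
  {"name": "Group B", "items": [{"x": 25, "y": 55, "z": 12}]}
]
```

The same structure can then be returned verbatim by a REST endpoint and bound to the chart without any transformation.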

As the functionality is the same, the best way is to ensure we use a valid JSON structure. 

Thursday, February 15, 2018

Oracle Linux - Nginx sendfile issues

Nginx is currently a very popular HTTP server and might take over the popular position currently held by Apache HTTPD. For good reason Nginx is becoming very popular: it is blazing fast and is built without the legacy Apache carries, with the modern day requirement set for an HTTP server in mind. One of the things that supports the speed of Nginx is the sendfile option.

Sendfile is enabled by default, and if you run a production server that also provides end users with static files, you definitely want this enabled.

Nginx's initial fame came from its awesomeness at sending static files, which has a lot to do with the combination of sendfile, tcp_nodelay and tcp_nopush in nginx.conf. The sendfile option enables the use of the sendfile syscall for everything related to, well, sending files. sendfile transfers data from one file descriptor to another directly in kernel space, which saves a lot of resources:

  1. sendfile is a syscall, which means execution happens inside kernel space, hence no costly context switching.
  2. sendfile replaces the combination of both read and write.
  3. sendfile allows zero copy, which means writing directly to the kernel buffer from the block device memory through DMA.
Even though sendfile is great when running a production server, you can run into issues when doing development. In one of my development setups I run Oracle Linux 7 in a Vagrant box on my MacBook to run an instance of Nginx. I use the default /vagrant mount point from within the Oracle Linux Vagrant box towards the filesystem of my MacBook as the location to serve HTML files.

The main reason I like this setup is that I can edit all files directly on my MacBook and have Nginx serve them for testing. The issue is that Nginx does not always notice that I changed an HTML file and keeps serving the old version instead of the changed one.

As it turns out, the issue is the way sendfile buffers files and checks them for changes. As it is not the Oracle Linux operating system that changes the file but the OS of my MacBook, the change is not always noticed correctly. This causes a lot of issues while making interactive changes to HTML code.

The solution for this issue is to disable sendfile in Nginx. You can do so by adding the below line to your Nginx config file:

sendfile  off;

After this is done, your issue should be resolved and Nginx should start to provide you updated content instead of buffered content.  

Wednesday, February 14, 2018

Oracle to expand Dutch data centre ‘significantly’

As reported by : Oracle plans to expand its Dutch data centre ‘significantly’ to meet demand for its integrated cloud services, the Financieele Dagblad newspaper said on Tuesday.  Financial details were not disclosed but data centres generally cost several hundred million euros, the paper said.

This is Oracle’s second large investment in the Netherlands in a short time. Two years ago the company set up a new sales office in Amsterdam to cover Scandinavia, the Benelux and Germany, marketing Oracle’s cloud services to companies and institutions. The office has a payroll of 450 people, of whom 75% come from abroad.

The Netherlands is popular as a data storage centre. At the end of 2016, Google opened a large new €600m data storage centre in Eemshaven. Microsoft is planning to spend €2bn on a new centre in Wieringermeer while Equinix of the US is to open a new €160m centre in Amsterdam’s  Science Park, near the Oracle facility.

Oracle Scaleup Program to support startups

Whenever you are working on a startup you know you can use every bit of help you can get. Be it "simple" money to keep going, mentoring from someone in the industry or a veteran in doing business, or help in other ways. Other ways can be people and companies who believe in your startup and support you by providing office space, providing equipment, or helping to get that first customer to sign up with your startup.

If you have ever worked at a startup you know that the first period of your startup is crucial. And even if your product or service is brilliant, getting it from an idea to something that can be delivered is hard. People sometimes have a romantic idea of starting a startup; in reality it is very (very, very) hard work where you are happy with every bit of support you can get.

Oracle is expanding its support to the startup community and recently announced this (also) via Twitter with the below tweet:
Are you part of a #startup? Then, you don't want to miss this news: @Oracle is expanding its global startup ecosystem to reach more #entrepreneurs worldwide. Say hello to the new Virtual Global #Scaleup Program! @OracleSCA 
In effect the Oracle ScaleUp program supports startups and Oracle announced the expansion of its global startup ecosystem in an effort to increase the impact and support for the wider startup community, reach more entrepreneurs worldwide, and drive cloud adoption and innovation. The expansion includes the launch of a new virtual-style, non-residential global program, named Oracle Scaleup Ecosystem, as well as the addition of Austin to the residential Oracle Startup Cloud Accelerator program. The addition of Austin brings the residential program to North America and expands the accelerator’s reach to nine total global locations.

If you are a startup, getting help from Oracle might be vital in becoming a success story. And as an entrepreneur you will recognize that all support is welcome in the vital early stage of your startup. Reaching out to Oracle and the Scaleup program might be a good idea.