Wednesday, February 28, 2018

WebLogic - log file monitoring with Elastic

When building a central logging solution, for example with Elastic, on a WebLogic centric environment, you have to understand how WebLogic handles logging in a cluster configuration. If done in the wrong manner you will flood your Elastic solution with a large number of duplicate records.

In effect, each WebLogic server generates its own logging, which is written to a local file on the operating system where that server is running. In addition, the server's logger process sends the log messages to the Domain Log Broadcaster; all messages, with the exception of those marked as Debug, are sent through a filter to the domain logger on the WebLogic admin server.

The domain logger on the admin server takes the log records broadcast by all servers in the cluster, together with the logging of the admin server itself, and writes them (after filtering) into a single consolidated domain log file. At the same time, every server keeps writing its own records to its local server log file.
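
In a default installation these files typically end up in the locations shown below (DOMAIN_HOME, the server names and the domain name will of course differ per environment):

    # Consolidated domain log, only on the machine running the admin server
    $DOMAIN_HOME/servers/AdminServer/logs/<domain_name>.log

    # Local server log, one per server, on the machine running that server
    $DOMAIN_HOME/servers/<server_name>/logs/<server_name>.log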

When using Filebeat from Elastic you have to indicate which files Filebeat needs to watch. In general there are two options.

  1. Put the domain log file under watch of Filebeat and have all records written to this file sent to Elasticsearch.
  2. Put all the individual server log files under watch of Filebeat and have all records written to these files sent to Elasticsearch.


When applying option 1 you run the risk of missing log records, because filtering can be applied at several points in the chain of events. Additionally, if the admin server has an issue and the broadcast log entries never reach the domain log, those records will never arrive in Elasticsearch and you will not be able to see them in Kibana.

The better choice is option 2, even though this requires you to install Filebeat on multiple servers.
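
As an illustration, a minimal Filebeat configuration for option 2 could look like the sketch below. This assumes Filebeat 6.x, a made-up domain location and Elasticsearch host, and uses the fact that every WebLogic log record starts with #### to group multi-line records (such as stack traces) into a single event:

    filebeat.prospectors:
    - type: log
      paths:
        # One Filebeat per machine, watching all local server logs
        - /u01/oracle/domains/mydomain/servers/*/logs/*.log
      # Every WebLogic log record starts with ####<timestamp>; treat
      # all following lines as part of the same event
      multiline.pattern: '^####'
      multiline.negate: true
      multiline.match: after

    output.elasticsearch:
      hosts: ["elasticsearch.example.com:9200"]

Keep in mind that on the machine running the admin server a wildcard like this would also pick up the consolidated domain log and reintroduce the duplicate records; there you would point Filebeat at the server log file explicitly, or filter the domain log out with the exclude_files option.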
