Configuration
ColdBox Installation
Since this module utilizes other module dependencies that are designed to work within the ColdBox framework, it may only be used within the context of a ColdBox application.
By just installing the module, a LogBox Logstash appender will be registered to capture all messages of FATAL or ERROR severity and ship those logs to an Elasticsearch time-series data stream.
Configuration
The cbElasticsearch module is bundled in the installation of this module so, if you are utilizing a direct connection, you will need to first configure the Elasticsearch connection in config/Coldbox.cfc. See the cbElasticsearch configuration.
The default configuration structure for the module looks like this. Note that environment variables or Java properties may be provided to configure the module without adding any additional code to your config/Coldbox.cfc file:
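A sketch of that structure, as it could be overridden from config/Coldbox.cfc, is shown below. The setting key names are inferred from the environment variable names that follow, and the defaults shown mirror those described there, so verify both against the module's ModuleConfig.cfc before relying on them:

```cfc
// config/Coldbox.cfc ( inside configure() ) - illustrative settings structure
moduleSettings = {
	logstash = {
		// The application name transmitted with every log entry
		"applicationName"    : getSystemSetting( "LOGSTASH_APPLICATION_NAME", "myApp" ),
		// Enable/disable the inbound API endpoint and the built-in error appenders
		"enableAPI"          : getSystemSetting( "LOGSTASH_ENABLE_API", true ),
		"enableAppenders"    : getSystemSetting( "LOGSTASH_ENABLE_APPENDERS", true ),
		// "direct" ( default ) or "api"
		"transmissionMethod" : getSystemSetting( "LOGSTASH_TRANSMISSION_METHOD", "direct" ),
		// API transmission settings
		"apiUrl"             : getSystemSetting( "LOGSTASH_API_URL", "" ),
		"apiWhitelist"       : getSystemSetting( "LOGSTASH_API_WHITELIST", "" ),
		"apiToken"           : getSystemSetting( "LOGSTASH_API_TOKEN", "" ),
		// Log level boundaries for the appender
		"levelMin"           : getSystemSetting( "LOGSTASH_LEVEL_MIN", "FATAL" ),
		"levelMax"           : getSystemSetting( "LOGSTASH_LEVEL_MAX", "ERROR" ),
		// Data stream settings
		"dataStream"         : getSystemSetting( "LOGSTASH_DATASTREAM", "logs-coldbox-logstash-appender" ),
		// Template-related settings - usually left empty so the module applies its own defaults
		"dataStreamPattern"  : getSystemSetting( "LOGSTASH_DATASTREAM_PATTERN", "" ),
		"ILMPolicy"          : getSystemSetting( "LOGSTASH_ILMPOLICY", "" ),
		"componentTemplate"  : getSystemSetting( "LOGSTASH_COMPONENT_TEMPLATE", "" ),
		"indexTemplate"      : getSystemSetting( "LOGSTASH_INDEX_TEMPLATE", "" ),
		// Retention and index sizing
		"retentionDays"      : getSystemSetting( "LOGSTASH_RETENTION_DAYS", 365 ),
		"indexShards"        : getSystemSetting( "LOGSTASH_INDEX_SHARDS", 1 ),
		"indexReplicas"      : getSystemSetting( "LOGSTASH_INDEX_REPLICAS", 0 ),
		// v2 to v3 migration settings
		"migrateIndices"     : getSystemSetting( "LOGSTASH_MIGRATE_V2", false ),
		"indexPrefix"        : getSystemSetting( "LOGSTASH_INDEX_PREFIX", "" )
	}
};
```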
The environment variable names are noted above in the getSystemSetting methods. For clarity, they are:
LOGSTASH_APPLICATION_NAME - The application name to transmit with all log entries
LOGSTASH_ENABLE_API - Disable or enable the API endpoint
LOGSTASH_ENABLE_APPENDERS - Disable or enable the application appenders ( built-in error logging )
LOGSTASH_TRANSMISSION_METHOD - direct or api
LOGSTASH_API_URL - The URL of your logstash API service
LOGSTASH_API_WHITELIST - Regex for host IP addresses allowed to transmit messages
LOGSTASH_API_TOKEN - A user-provided token used to verify permissions between the client and the API server
LOGSTASH_LEVEL_MIN - A minimum log level for the appender. FATAL is probably the best choice.
LOGSTASH_LEVEL_MAX - The max level to log. Defaults to ERROR, but could be set lower ( e.g. WARN ) if more logging output is desired.
LOGSTASH_DATASTREAM - The name of the time-series data stream to use for your logs. Defaults to logs-coldbox-logstash-appender
LOGSTASH_DATASTREAM_PATTERN - The index pattern for the backing component/index templates to use. In most cases, you will not need to provide this.
LOGSTASH_ILMPOLICY - The name of the ILM policy to use for your data stream. In most cases, you will not need to provide this.
LOGSTASH_COMPONENT_TEMPLATE - The name of the component template to use for the index template. In most cases, you will not need to provide this.
LOGSTASH_INDEX_TEMPLATE - The name of the index template to apply to your data stream. In most cases, you will not need to provide this.
LOGSTASH_RETENTION_DAYS - The number of days to retain log data. Defaults to 365 days.
LOGSTASH_INDEX_SHARDS - The number of shards to use for indices created by the data stream. Defaults to 1.
LOGSTASH_INDEX_REPLICAS - The number of replicas to use for indices created by the data stream. Defaults to 0.
LOGSTASH_MIGRATE_V2 - If this variable and the variable below are provided, the appender registration will attempt to migrate your data from the v2 indices to the new data stream in v3
LOGSTASH_INDEX_PREFIX - Backward compatibility field for v2. If this key is present and the migrateIndices setting is enabled, the data in the matching v2 indices will be migrated to the new data stream.
Transmission Modes
As noted above, this module may be used with either a direct connection to an Elasticsearch server ( configured in your ColdBox application or via environment variables ) or it can transmit to a microservice version of itself via API. There are two valid transmission modes: direct ( default ) and api. In the case of the former, messages are logged directly to an Elasticsearch server via the cbElasticsearch module. In the case of the latter, you will need to supply configuration options for the API endpoint to be used in logging messages.
Direct
For a direct configuration, with no API enabled, our settings would be the following:
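A minimal sketch, using the same assumed key names as above. Since direct is the default transmission method, only the inbound API endpoint needs to be switched off:

```cfc
// config/Coldbox.cfc - direct transmission, API endpoint disabled
moduleSettings = {
	logstash = {
		// Direct transmission to Elasticsearch is the default, so we only disable the inbound API
		"enableAPI" : false
	}
};
```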
API
Because direct is the default, the above configuration only disables the API; there is no need to pass in additional configuration options. For an API transmission, our configuration becomes a little more complex:
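A sketch of a client-side API configuration, again using the assumed key names. The endpoint URL shown is hypothetical, and the token would normally come from the environment:

```cfc
// config/Coldbox.cfc - ship log messages to a remote logstash microservice
moduleSettings = {
	logstash = {
		"applicationName"    : getSystemSetting( "LOGSTASH_APPLICATION_NAME", "myApp" ),
		// Transmit via the API rather than a direct Elasticsearch connection
		"transmissionMethod" : "api",
		// Hypothetical endpoint of the receiving microservice
		"apiUrl"             : getSystemSetting( "LOGSTASH_API_URL", "https://logs.example.com/logstash/api/logs" ),
		// Must match the token configured on the receiving microservice
		"apiToken"           : getSystemSetting( "LOGSTASH_API_TOKEN", "" ),
		// This client does not need to receive messages itself
		"enableAPI"          : false
	}
};
```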
Note that the token is provided by you. The token on the client must match the token on the receiving microservice, however, so this is an excellent use case for environment variables.
Microservice configuration
If you are planning on running a separate instance to receive log messages, you can deploy a Coldbox application, with only the logstash module installed, as a microservice. In this case, our configuration would need to whitelist the IP of the client or allow all addresses to transmit with an apiWhitelist
value of '*'. An example configuration for this microservice might be:
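One possible configuration for the receiving instance, with the same caveat about assumed key names. The whitelist here allows any client to transmit, so the shared token does the gatekeeping:

```cfc
// config/Coldbox.cfc on the logging microservice
moduleSettings = {
	logstash = {
		"applicationName" : "logstash-service",
		// Accept inbound log messages over the API
		"enableAPI"       : true,
		// Allow any address to transmit ( or supply a regex of permitted client IPs )
		"apiWhitelist"    : "*",
		// Must match the token configured on each client application
		"apiToken"        : getSystemSetting( "LOGSTASH_API_TOKEN", "" )
	}
};
```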
Custom Lifecycle Policy
You may also provide a custom lifecycle policy to the module. This will supersede the default lifecycle of a simple deletion after 365 days. The policy must be supplied as a JSON representation of the policy or as a Policy object. If you supply your own policy, the retentionDays setting will not be applied, so your policy will need to handle retention itself. Example with three phases after the initial "hot" phase:
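The sketch below assumes a lifecyclePolicy settings key ( the exact key name may differ in the module ) and serializes a struct that mirrors the Elasticsearch ILM phase format; depending on the module version, the outer "policy" wrapper may also be required:

```cfc
// config/Coldbox.cfc - custom ILM policy supplied as a JSON string
moduleSettings = {
	logstash = {
		"lifecyclePolicy" : serializeJSON( {
			"phases" : {
				// Hot phase: roll the backing index over at 30GB or 30 days
				"hot"    : { "actions" : { "rollover" : { "max_primary_shard_size" : "30gb", "max_age" : "30d" } } },
				// Warm phase after 30 days: shrink to a single shard
				"warm"   : { "min_age" : "30d", "actions" : { "shrink" : { "number_of_shards" : 1 } } },
				// Cold phase after 90 days: lower the recovery priority
				"cold"   : { "min_age" : "90d", "actions" : { "set_priority" : { "priority" : 0 } } },
				// Delete phase: remove data after one year
				"delete" : { "min_age" : "365d", "actions" : { "delete" : {} } }
			}
		} )
	}
};
```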
For more information on ILM policies, see the documentation.
User Info Closure
A custom user information closure may be provided in your module configuration. This allows you to append additional information about the state of the error and/or your application ( see the log schema section below ).
If a struct or array is returned, it is serialized as JSON in the userinfo key of the log entry. You may return any string as well. Let's say we wanted to capture the URL scope, the user's ID, and the server state information with every logged message. We could provide the UDF like so:
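A sketch of such a closure. The session key used for the user's ID is hypothetical and assumes session management is enabled:

```cfc
// config/Coldbox.cfc - userInfoUDF returning a struct that is serialized into the userinfo key
moduleSettings = {
	logstash = {
		"userInfoUDF" : function(){
			return {
				// The URL scope of the request that produced the log entry
				"url"    : url,
				// Hypothetical session key holding the authenticated user's ID
				"userId" : session.keyExists( "userId" ) ? session.userId : "",
				// Basic server state information
				"server" : {
					"coldfusion" : server.coldfusion,
					"os"         : server.os
				}
			};
		}
	}
};
```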
Note that the userInfoUDF is designed to fail softly, so as to prevent error messages from being generated by error logging. As such, if the closure fails, you will see this message in the userInfo key: An error occurred when attempting to run the userInfoUDF provided. The message received was [ message text of error thrown ]
Index naming conventions
By default, the indexes created and used by the Logstash module use the following prefix: .logstash-[ lower-cased, alphanumeric application name ]. The .logstash- prefix is a convention used by the ELK stack to denote two things:
The index is non-public
The index contains logs.
Tools like Kibana will automatically filter logging indices by looking for this name.
You may change the default prefix used for logging indices with the indexPrefix key in the module settings, or by providing a LOGSTASH_INDEX_PREFIX environment variable.
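For example, keeping the conventional leading dot ( the prefix value shown is illustrative ):

```cfc
// config/Coldbox.cfc - override the prefix used for logging indices
moduleSettings = {
	logstash = {
		"indexPrefix" : getSystemSetting( "LOGSTASH_INDEX_PREFIX", ".logstash-myapp" )
	}
};
```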