ArcGIS Enterprise log data
Procedure
- Executing the statements from dev-console/ct-arcgis-logfile.txt in the Kibana Dev Console
- Importing Kibana dashboards, queries and index patterns from the kibana/ct-arcgis/export.ndjson file (see the import example after this list)
- Configuring the Logstash pipeline ct-arcgis-logfile (a minimal pipeline sketch is shown at the end of this page)
- Setting up the ingest pipeline ingest/ct-monitor-arcgis-parse-servicename.txt (see below)
- Configuring Filebeat on the ArcGIS Enterprise host to poll the log files on a regular basis (see below)
- Verifying that the ArcGIS log level is set correctly (see below)
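The saved objects can be imported either through the Kibana UI (Stack Management > Saved Objects > Import) or via the saved objects API. A minimal sketch of the API variant, assuming Kibana runs at kibana.host:5601 without authentication (adjust host, port and credentials to your environment):

curl -X POST "http://kibana.host:5601/api/saved_objects/_import?overwrite=true" \
  -H "kbn-xsrf: true" \
  --form file=@kibana/ct-arcgis/export.ndjson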
Setting up the Elastic ingest pipeline
The ingest pipeline extracts the ArcGIS service name from the log message for events where the ags.target field is not already populated.
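The actual processor definitions are kept in ingest/ct-monitor-arcgis-parse-servicename.txt and can be created from the Kibana Dev Console. The following is only an illustrative sketch of the approach, assuming the pipeline ID matches the file name; the grok pattern shown here is a placeholder, not the pattern from the repository file:

PUT _ingest/pipeline/ct-monitor-arcgis-parse-servicename
{
  "description": "Extract the ArcGIS service name when ags.target is not populated",
  "processors": [
    {
      "grok": {
        "if": "ctx.ags?.target == null",
        "field": "message",
        "patterns": ["%{DATA:ags.target}\\.MapServer"],
        "ignore_failure": true
      }
    }
  ]
}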
Filebeat configuration - Notes
The Filebeat component must be installed on each ArcGIS host whose log data is to be collected. Filebeat 7.x is currently supported; no problems have been observed with Filebeat 8.x so far.
The Filebeat configuration is based on the template filebeat/arcgis-logfile/filebeat.yml:
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - c:\arcgisserver\logs\*\server\*.log
    - c:\arcgisserver\logs\*\services\*\*.log
    - c:\arcgisserver\logs\*\services\*\*\*.log
    - c:\arcgisserver\logs\*\services\System\*\*.log
  fields:
    type: server
  multiline.pattern: '^<Msg([^>]*?)>(.*)'
  multiline.negate: true
  multiline.match: after

output.logstash:
  hosts: ["logstash.host:5604"]

fields:
  env: PROD
Set the value of fields/type to server, portal or datastore to allow better filtering in Kibana. The same applies to fields/env, which distinguishes between different stages.
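For reference, the ct-arcgis-logfile Logstash pipeline has to listen on the port configured under output.logstash above. The actual pipeline definition ships with the repository; the following is only a minimal sketch with the filter section omitted, where the Elasticsearch host and the ingest pipeline ID (assumed to match the file name above) are placeholders:

input {
  beats {
    port => 5604
  }
}
output {
  elasticsearch {
    hosts => ["https://elasticsearch.host:9200"]
    # apply the ingest pipeline that extracts the service name
    pipeline => "ct-monitor-arcgis-parse-servicename"
  }
}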