Prepare Elasticsearch Stack via Python
The provided Python script creates the indices, templates, ingest pipelines, index lifecycle templates and dashboard objects, which reduces much of the manual work.
Python environment and wheel package
The script is delivered as a wheel package, which needs to be installed in a local Python 3 environment. When choosing where to run the script, make sure the target Elasticsearch instance is reachable via HTTP.
A detailed description of how to set up the wheel package can be found in resources/analytics/python/monitor_setup/README.md.
Configuration of parameters
Once the environment is operable, the provided config.json needs to be set up with URLs and credentials:
{
    "common": {
        "local_imports_dir": "<PATH TO resources/analytics/elasticsearch>",
        "overwrite_objects": false
    },
    "elasticsearch": {
        "url": "http://elastic-host.example.com:9200",
        "username": "elastic",
        "password": "<elastic_pwd>"
    },
    "kibana": {
        "url": "http://kibana-host.example.com:5601"
    }
}
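Before running the script, it can help to validate the configuration file against the expected structure. The sketch below is purely illustrative: the helper validate_config is not part of the delivered package, and the required key names simply follow the sample above.

```python
import json

# Sections and keys the sample configuration above is expected to contain.
REQUIRED = {
    "common": ["local_imports_dir", "overwrite_objects"],
    "elasticsearch": ["url", "username", "password"],
    "kibana": ["url"],
}

def validate_config(path):
    """Load config.json and return a list of missing section.key entries.

    An empty list means all expected keys are present.
    """
    with open(path, encoding="utf-8") as fh:
        cfg = json.load(fh)
    return [
        f"{section}.{key}"
        for section, keys in REQUIRED.items()
        for key in keys
        if key not in cfg.get(section, {})
    ]
```

For example, validate_config(r"C:\data\config.json") returns an empty list when the file matches the sample layout.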
The script reads the object data provided with this installation and sends it to Elasticsearch and Kibana. The configuration parameter local_imports_dir specifies the path to the objects directory.
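A quick sanity check on the configured local_imports_dir can catch path mistakes before the import starts. The helper below is a hypothetical illustration, not part of the delivered package:

```python
from pathlib import Path

def check_imports_dir(path_str):
    """Return True if the objects directory exists and contains at least one entry."""
    path = Path(path_str)
    return path.is_dir() and any(path.iterdir())
```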
Executing the script
The script is executed from the command line of your Python environment. If only FME Flow related objects shall be installed, use the additional parameter -t.
python -m monitor_setup -c C:\data\config.json -a full
python -m monitor_setup -c C:\data\config.json -a full -t FME
Watch the command-line output to verify whether the operation succeeded.
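The invocation can also be scripted, for example from an automated deployment. The wrapper below is a hypothetical sketch around the commands shown above; it assumes that monitor_setup exits with a non-zero status on failure, which is conventional for Python command-line tools but not documented here.

```python
import subprocess
import sys

def build_command(config_path, action="full", target=None):
    """Assemble the monitor_setup command line, mirroring the examples above."""
    cmd = [sys.executable, "-m", "monitor_setup", "-c", config_path, "-a", action]
    if target:  # e.g. "FME" to install only FME Flow related objects
        cmd += ["-t", target]
    return cmd

def run_monitor_setup(config_path, action="full", target=None):
    """Run the setup and report success based on the exit code (assumed convention)."""
    result = subprocess.run(
        build_command(config_path, action, target),
        capture_output=True,
        text=True,
    )
    print(result.stdout)
    if result.returncode != 0:
        print(result.stderr, file=sys.stderr)
    return result.returncode == 0
```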