Logstash – Process Log File Once and Exit/Stop Logstash After Reading Log File Once

I have a scenario in Logstash where I want to process a log file once a day. A new log file is added to a path each day; it needs to be processed exactly once, and processing the entire file takes roughly 10 minutes. I do not want a Logstash process running round the clock just to handle this single, small file.

Usually we configure Logstash to read logs from a location using the file input plugin, but that keeps the Logstash process running continuously. There is a workaround that makes Logstash process the given file and then exit (i.e. finish the process): feed the file to Logstash on standard input using the stdin input plugin.

./logstash agent -e 'input { stdin {} } output { ANY_OUTPUT_FILTER }' < /file_path/test.log


./logstash agent -f /tmp/local/logstash.conf < /file_path/test.log

            The Logstash -e option lets us supply the configuration as inline text, while -f lets us supply it as a configuration file.
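For reference, here is a minimal sketch of what such a configuration file (the /tmp/local/logstash.conf used above) might contain. The stdout output is only a placeholder standing in for whatever output plugin you actually use:

```
# Hypothetical /tmp/local/logstash.conf
input {
  # Read events from standard input; when the redirected file reaches
  # end-of-file, stdin closes and Logstash shuts down on its own
  stdin { }
}

output {
  # Placeholder output: print each event to the console; replace with
  # elasticsearch, file, or any other output plugin you need
  stdout { codec => rubydebug }
}
```

Because the stdin input terminates at end-of-file, running `./logstash agent -f /tmp/local/logstash.conf < /file_path/test.log` processes the file once and then exits.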




8 thoughts on “Logstash – Process Log File Once and Exit/Stop Logstash After Reading Log File Once”

  1. ishita

    Hello Raghavendar,

    Your method also does not work in my case. Can you provide some details?
    1. The Logstash configuration file should not contain any input plugin now (since we are providing the log file path on the command line)?
    2. Also, once I execute the above command, the process runs and exits in 15-20 seconds. Why?

    Let me know in case you need any other information.

    1. admin@280392

      Consider the file input plugin, which takes a file/folder as input. The Logstash process will continuously tail the file for new content, and whenever new content exists, Logstash will process it. This is quite normal. Now consider a case where you have a file with some 1000 lines and you are sure that no new content will be added to it. Using the file input plugin in this case will make Logstash process the file, but the process will just keep running; it won't get killed/stopped after it has processed the file. That usage suits the normal case, where content is added to a file endlessly. You can use my approach if you have a file to process only once.

      For me, I had a requirement to process a folder: each day a log file is added to that folder, and I need to process it only once since no further content will be added to it. You got it now? Also, there is no straightforward way to make Logstash process a log file only once and exit using the file input plugin.
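      For contrast, this is a sketch of the file input configuration being described here (the folder path is hypothetical); with this input, Logstash keeps running and tailing indefinitely:

      ```
      input {
        # Watches the folder and tails matching files for new lines;
        # the process never exits on its own
        file {
          path => "/var/log/myapp/*.log"   # hypothetical daily-log folder
          start_position => "beginning"    # read existing content, not only new lines
        }
      }
      ```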

  2. Roman

    This definitely worked for me after some trial and error. I’m using a custom config file ‘test.conf’ and had to define an input filter in the config file as follows:

    input {
      stdin {}
    }

    The command to read in my file ‘test.log’ via logstash then looked like: /opt/logstash/bin/logstash -f test.conf < test.log

    After parsing the contents of the file, the logstash process shutdown successfully.

    Before I started defining the input filter in the config file like above, the logstash process would startup and quickly shutdown without parsing the contents of the file, like ishita described.

  3. coolgeekthings

    The key here is not to put the word “agent” as a command-line parameter to logstash if you don’t want it to run as a continuous service. In summary, logstash does support batch mode natively: all you have to do is use the stdin input plugin, pipe the file in, and not run it in agent mode.

  4. SKumar

    This post is awesome, it worked for me.

    All I had to do was use the below in my config file:

    input {
      stdin {}
    }

    Then use logstash/bin/logstash -f test.conf < test.log

    Where test.log is the path of the file that needs to be processed by Logstash.

    Thank you so much for the article

  5. Ketan

    I have a case where I am using Logstash for reindexing huge data (a big index into small monthly indexes). Input and output both have the same Elasticsearch URL. How do I use this workaround to exit Logstash after reindexing finishes?

