How can I get the default `@timestamp` on elastic output?
Elasticsearch output question. How can I get the default `@timestamp` on the Elasticsearch output? It seems like `_time` should automatically be renamed to `@timestamp`.
Answers
-
When the data lands in Elasticsearch you are still seeing the `_time` field?
0 -
`_time` is automatically converted to `@timestamp` on the outgoing event sent to Elasticsearch... but it happens to the data on the wire vs. what you see in the live stream.
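To make the wire-vs-stream distinction concrete, here is a minimal sketch of that rename, assuming `_time` holds epoch seconds as in the examples later in this thread. This is illustrative only, not Cribl's actual code; the `to_wire` helper is hypothetical.

```python
from datetime import datetime, timezone

def to_wire(event):
    # Hypothetical sketch: replace _time (epoch seconds) with an
    # ISO-8601 @timestamp, as the output does before sending to Elasticsearch.
    wire = dict(event)
    epoch = wire.pop("_time")
    wire["@timestamp"] = (
        datetime.fromtimestamp(epoch, tz=timezone.utc)
        .strftime("%Y-%m-%dT%H:%M:%S.%f")[:-3] + "Z"
    )
    return wire

print(to_wire({"_time": 1679511521, "name": "interfaces"}))
# → {'name': 'interfaces', '@timestamp': '2023-03-22T18:58:41.000Z'}
```

The live preview shows the event before this step, which is why you still see `_time` there.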
0 -
I've converted my index to TSDS and the output doesn't show any problem, but nothing shows up in Elasticsearch. When I check the output of the pipeline (live), I don't see `@timestamp`.
0 -
The live view will still show `_time`. Are you seeing the number of documents within Elasticsearch increment?
0 -
No, and without errors
0 -
But if I try on the console, I do get a 400 error code
0 -
```
{
  "error": {
    "root_cause": [
      {
        "type": "illegal_argument_exception",
        "reason": "Error extracting data stream timestamp field: Failed to parse object: expecting token of type [START_OBJECT] but found [null]"
      }
    ],
    "type": "illegal_argument_exception",
    "reason": "Error extracting data stream timestamp field: Failed to parse object: expecting token of type [START_OBJECT] but found [null]",
    "caused_by": {
      "type": "parsing_exception",
      "reason": "Failed to parse object: expecting token of type [START_OBJECT] but found [null]",
      "line": 25,
      "col": 1
    }
  },
  "status": 400
}
```
0 -
I believe my problem is my template, but it's odd that the worker is not reporting the failures
0 -
The worker won't get an error, as Elasticsearch has accepted the data and then failed to parse it
0 -
Would be nice to see the proper output on the live view...
0 -
> Would be nice to see the proper output on the live view...

We happen to have an open feature request for this, to show data as it's sent on the wire
0 -
I'm not sure how it accepted the data. Putting the timestamp in got me this output:
```
{
  "error": {
    "root_cause": [
      {
        "type": "illegal_argument_exception",
        "reason": "the document timestamp [1970-01-20T10:31:51.000Z] is outside of ranges of currently writable indices [[2023-03-21T18:34:05.000Z,2023-03-23T19:10:00.261Z]]"
      }
    ],
    "type": "illegal_argument_exception",
    "reason": "the document timestamp [1970-01-20T10:31:51.000Z] is outside of ranges of currently writable indices [[2023-03-21T18:34:05.000Z,2023-03-23T19:10:00.261Z]]"
  },
  "status": 400
}
```
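A 1970 date like this is a classic symptom of a units mismatch: `1679511521` is an epoch value in *seconds* (March 2023), but something treating it as *milliseconds* lands roughly 19 days after the epoch, which matches the `1970-01-20T10:31:51` in the error. A quick check, using the `_time` value from the thread:

```python
from datetime import datetime, timezone

raw = 1679511521  # the _time value from the example document

as_seconds = datetime.fromtimestamp(raw, tz=timezone.utc)
as_millis = datetime.fromtimestamp(raw / 1000, tz=timezone.utc)

print(as_seconds.isoformat())  # 2023-03-22T18:58:41+00:00
print(as_millis.isoformat())   # 1970-01-20T10:31:51.521000+00:00
```

So the rejected timestamp is almost certainly the epoch-seconds `_time` being read as milliseconds somewhere along the way.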
0 -
Which format of @timestamp is Cribl using?
0 -
```"@timestamp": "2023-03-22T19:03:10.814Z"``` ISO format
0 -
Hmmm. I'm using `strict_date_optional_time`, which seems correct
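For reference, a data stream's index template typically maps `@timestamp` explicitly. A minimal sketch (the template and index-pattern names here are hypothetical; a true TSDS additionally needs `index.mode: time_series` settings and routing dimensions, which are omitted):

```
PUT _index_template/interfaces-sensors-template
{
  "index_patterns": ["interfaces-sensors-*"],
  "data_stream": {},
  "template": {
    "mappings": {
      "properties": {
        "@timestamp": {
          "type": "date",
          "format": "strict_date_optional_time||epoch_millis"
        }
      }
    }
  }
}
```

With `epoch_millis` in the format, a numeric millisecond value would also be accepted, but an epoch-seconds value would still parse as 1970.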
0 -
<@U0410L186KS> Just confirming, this is a datastream you are sending to, rather than a standard index?
0 -
Correct
0 -
I didn't see the option to select datastream but it said the output supports it
0 -
I'm not doing that anywhere, but I do know there are differences in the Logstash config when sending to a data stream
0 -
I posted this and that worked:
```
POST /interfaces-sensors-ptx/_doc
{
  "name": "interfaces",
  "@timestamp": "2023-03-23T19:10:00.261Z",
  "_time": 1679511521,
  "carrier-transitions": 319
  ...
}
```
0 -
There are differences; let me check the documentation to see if there is a way to "create" instead of "index"
0 -
Could not find anything specific. Looks like I might need to "sniff" traffic to see what's going on.
0 -
You can force it with Logstash using `action: create`.
0 -
Or you can set `data_stream: true` in newer versions, which auto-extracts where you want things to go: https://www.elastic.co/guide/en/logstash/current/plugins-outputs-elasticsearch.html#plugins-outputs-elasticsearch-data_stream
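For context, data streams only accept the `create` action on the bulk API; a plain `index` action is rejected. A minimal `_bulk` body (the stream name is taken from the thread, field values are illustrative):

```
POST _bulk
{ "create": { "_index": "interfaces-sensors-ptx" } }
{ "@timestamp": "2023-03-23T19:10:00.261Z", "name": "interfaces", "carrier-transitions": 319 }
```

Note that with `create`, Elasticsearch generates the document `_id`; supplying one yourself only makes sense for deduplication.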
0 -
The test that Harry did above looks to be correctly using the create action against the bulk API. What version of Cribl are you on?
0 -
4.1.0
0 -
Fairly new....
0 -
Maybe try and get the actual output from your Cribl by sniffing, and try running that in the console to see what errors come up.
0 -
I think the problem is the creation of an `_id`
0