Where do I find the backpressure graphs?
Where do I find the backpressure graphs? I normally stumble into them, but I'm not having any luck this morning....
Answers
-
This can be seen in the Destination's 'Charts' tab; the bottom chart, the Blocked Status chart, will show a red line (value 1) when backpressuring.
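If you'd rather check for this outside the UI, here's a minimal sketch (not an official Cribl tool) that scans a worker's log for backpressure-related entries. It assumes an on-prem worker with the default install path /opt/cribl, NDJSON logs in log/cribl.log, and that backpressure events mention the word "backpressure" in their message; adjust all of those for your deployment:

```python
import json
from pathlib import Path

CRIBL_HOME = Path("/opt/cribl")            # assumption: default install path
LOG_FILE = CRIBL_HOME / "log" / "cribl.log"

def backpressure_events(log_file: Path):
    """Yield parsed NDJSON log entries whose message mentions backpressure."""
    with log_file.open() as fh:
        for line in fh:
            try:
                entry = json.loads(line)
            except json.JSONDecodeError:
                continue                   # skip partially written lines
            if "backpressure" in str(entry.get("message", "")).lower():
                yield entry

if __name__ == "__main__":
    for event in backpressure_events(LOG_FILE):
        print(event.get("time"), event.get("channel"), event.get("message"))
```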
0 -
<@UUP82DJHE> Is this for Stream?
0 -
yes
0 -
I'm asking about Stream.
0 -
Managed -> Data -> Destinations -> <The Destination to check> -> Charts
0 -
Can you please give me the menu options you're using to get to it? I don't see this.
0 -
Got it
0 -
You can also see this in Monitoring -> Data -> Destinations, as seen here.
0 -
It appears we're not experiencing backpressure:
0 -
Here's a better example of a destination backpressuring, from Managed -> Data -> Destinations -> <The Destination to check> -> Charts:
0 -
Must be another problem.
0 -
Doesn't look like it. However, we're getting several errors for our Cribl Collectors that are trying to pull Okta logs.
0 -
Just curious, is there an error on the status screen and or logs?
0 -
message:Error reading stats file for job: 1677098100.249.scheduled.in_REST_ProofPoint_TAP message:Error reading stats file for job: 1677098100.250.scheduled.in_REST_Okta_Prod_External
0 -
Sorry, I should have specified more clearly. The screenshot above is from a Google Chronicle Destination. Are there errors being reported for that Destination (in the Logs tab) that lead you to believe there is backpressure?
0 -
Actually, I think all of our Cribl Collectors are being impacted right now.
0 -
Just that "Backpressure" has been a common symptom when we experience receiving several email notifications for Cortex Data Lake disconnect notifications.
0 -
There don't appear to be any errors for the Cribl_to_Chronicle Destination:
0 -
Lots of errors for our Collectors:
0 -
{ "time": "2023-03-23T18:07:54.507Z", "cid": "api", "channel": "rest:jobs", "level": "error", "message": "API Error", "error": { "message": "Failed to find job with id=in_REST_ProofPoint_TAP", "stack": "RESTError: Failed to find job with id=in_REST_ProofPoint_TAP
at L._get (/opt/cribl/bin/cribl.js:14:20679999)" }, "url": "/jobs/in_REST_ProofPoint_TAP" }0 -
If you expand one of the API errors, there should be additional information, like the HTTP code and reason, that will provide more insight.
0 -
They all appear to be like that, except the errors are for each REST Collector we have set up.
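To see at a glance which Collectors are affected, here's a rough sketch that tallies the "API Error" entries per requested job URL. It assumes the same NDJSON log format as the entry above and the default on-prem log path; both are assumptions, so adjust as needed:

```python
import json
from collections import Counter
from pathlib import Path

LOG_FILE = Path("/opt/cribl/log/cribl.log")   # assumption: default log location

def count_api_errors(log_file: Path) -> Counter:
    """Count error-level "API Error" entries, keyed by the requested URL."""
    counts = Counter()
    with log_file.open() as fh:
        for line in fh:
            try:
                entry = json.loads(line)
            except json.JSONDecodeError:
                continue
            if entry.get("level") == "error" and entry.get("message") == "API Error":
                counts[entry.get("url", "unknown")] += 1
    return counts

if __name__ == "__main__":
    for url, n in count_api_errors(LOG_FILE).most_common():
        print(f"{n:5d}  {url}")
```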
0 -
Take that back: this is a case of the UI requesting a job artifact that has already been deleted, so it's an error, but not related to whether the job ran successfully or not. I'd check the status of these collector runs in Monitoring -> System -> Job Inspector -> Scheduled.
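If you want to check those scheduled runs without clicking through the Job Inspector, here's a hedged sketch against the Cribl REST API. The leader URL, group name, local-auth login flow, and the group-scoped /api/v1/m/<group>/jobs path are all assumptions based on a typical on-prem distributed deployment (the error above does show a /jobs/<id> URL); Cribl.Cloud or different auth will need adjustments:

```python
import requests

LEADER_URL = "https://leader.example.com:9000"   # assumption: hypothetical leader address
GROUP = "default"                                 # worker group to inspect

def get_token(username: str, password: str) -> str:
    # assumption: local-auth login endpoint returning a bearer token
    resp = requests.post(f"{LEADER_URL}/api/v1/auth/login",
                         json={"username": username, "password": password},
                         timeout=10)
    resp.raise_for_status()
    return resp.json()["token"]

def list_jobs(token: str):
    # assumption: group-scoped jobs listing; response shape ("items") is also an assumption
    resp = requests.get(f"{LEADER_URL}/api/v1/m/{GROUP}/jobs",
                        headers={"Authorization": f"Bearer {token}"},
                        timeout=10)
    resp.raise_for_status()
    return resp.json().get("items", [])

if __name__ == "__main__":
    token = get_token("admin", "change-me")
    for job in list_jobs(token):
        # field names below are assumptions; print the raw job dict if they don't match
        print(job.get("id"), job.get("status", {}).get("state"))
```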
0 -
~By chance is the file system full? TBH I've never seen this error.~
0 -
<@UUP82DJHE> For "All Groups":
0 -
For Cribl Cloud (Worker Group: default):
0 -
For on-prem Cribl Workers (Worker Group: NADTC2):
0 -
The Jobs for the "default" Group look okay, I think? ^
0 -
I don't think it's because of a missing Collector config. For example:
0 -
<@U02MYTESJ31> - I'd suggest opening a support case for this, definitely something strange going on.
0