Where do I find the backpressure graphs?

Answers

  • LovetheBeach
    LovetheBeach Posts: 54 ✭✭

    Thanks Harry. I did earlier this afternoon. Waiting to hear back.

  • <@U02MYTESJ31>, I don't see any errors with the actual runs of your collection jobs in your `default` worker group. And your `NADTC2` doesn't have any jobs configured, as far as I can see.

  • LovetheBeach
    LovetheBeach Posts: 54 ✭✭

    Yep. Thanks for confirming. :slightly_smiling_face:

  • What issue are you currently seeing, aside from the Job Errors <@UUP82DJHE> mentioned and the artifacts being removed?

  • LovetheBeach
    LovetheBeach Posts: 54 ✭✭

    Sorry <@U01SH7MGNH4>, I'm not clear on your question. We don't believe artifacts have been removed that would be causing these errors; I think that's the issue. I am seeing errors for all of our REST Sources in Cribl Cloud. This just recently started, as of yesterday.

  • LovetheBeach
    LovetheBeach Posts: 54 ✭✭

    What Cribl can't find actually exists:

    ```json
    {
      "time": "2023-03-23T21:40:50.386Z",
      "cid": "api",
      "channel": "JobStore",
      "level": "error",
      "message": "Error reading stats file for job: 1677107100.574.scheduled.in_REST_Okta_Prod_Internal",
      "stack": "Error: ENOENT: no such file or directory, open '/opt/cribl_data/failover/state/jobs/default/1677107100.574.scheduled.in_REST_Okta_Prod_Internal/stats.json'",
      "errno": -2,
      "code": "ENOENT",
      "syscall": "open",
      "path": "/opt/cribl_data/failover/state/jobs/default/1677107100.574.scheduled.in_REST_Okta_Prod_Internal/stats.json",
      "conflictingFields": {
        "message": "ENOENT: no such file or directory, open '/opt/cribl_data/failover/state/jobs/default/1677107100.574.scheduled.in_REST_Okta_Prod_Internal/stats.json'"
      }
    }
    ```

  • LovetheBeach
    LovetheBeach Posts: 54 ✭✭

    These errors are being generated for all of our REST Collectors in Cribl Cloud. ^

  • Checking

  • LovetheBeach
    LovetheBeach Posts: 54 ✭✭

    <@U01SH7MGNH4> I'm calling it a night. Talk with you tomorrow.

  • I'll snag the support case and continue to look. I think I know what's going on, and I'm looking to confirm.

  • LovetheBeach
    LovetheBeach Posts: 54 ✭✭

    <@U01SH7MGNH4> Cribl Support Case: 00006878

  • LovetheBeach
    LovetheBeach Posts: 54 ✭✭

    <@U01SH7MGNH4> Hi Prescott. I replied to the Support ticket, but thought I would try to ask here: Will we experience an outage while these orphaned jobs are removed? Also, do we need to stop making changes to Cribl while these orphaned jobs are removed? Finally, how do we prevent this from happening again?

  • 1. Nope, no outage! 2. You can continue to make changes at your convenience. 3. At this moment I am unsure what caused the orphaned jobs; I will still need to look into this and see what answers I can provide.

  • LovetheBeach
    LovetheBeach Posts: 54 ✭✭

    Thanks <@U01SH7MGNH4>. For #3, this is a concern because it means it may happen again. *Not* that we don't want you to clear these up; it's just that we may experience "deja vu."

    Here's something I shared with my colleague, Mike H., regarding some strange behavior that I believe started after we upgraded to 4.1: after the upgrade, I *think* I've noticed situations where I change a configuration, on a Function or whatever, and the configuration looks changed after I commit/deploy, but the behavior (via Captures) appears as though the config was *never* changed. To have the changes actually take effect, I've had to delete the config (a Function or whatever) and re-add it with the changes I originally tried to make. If something like that is happening elsewhere in our Cribl environment, I would not be surprised if backend dependencies go missing and generate these "can't find blah blah blah..." errors (aka broken dependencies).

    Again, because I experienced this while in the middle of something and just needed to get it done, this statement may not be completely accurate. But we have noticed some strangeness after the upgrade.
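
For reference, the `ENOENT` errors in this thread all point at scheduled-job state directories whose `stats.json` file is gone. A minimal shell sketch to enumerate such directories on a node, assuming the `/opt/cribl_data/failover/state/jobs/default` base path taken from the logged error (adjust `JOBS_DIR` for your deployment; this only lists candidates, it does not remove anything):

```shell
#!/bin/sh
# Sketch: list job state directories that are missing stats.json,
# the condition the ENOENT errors above report.
# JOBS_DIR comes from the logged error path and is an assumption;
# adjust it to match your deployment.
JOBS_DIR="/opt/cribl_data/failover/state/jobs/default"

[ -d "$JOBS_DIR" ] || { echo "no such directory: $JOBS_DIR"; exit 0; }

for dir in "$JOBS_DIR"/*/; do
    if [ ! -f "${dir}stats.json" ]; then
        echo "missing stats.json: ${dir}"
    fi
done
```

Directories this prints are candidates for the orphaned jobs support cleaned up; confirm with support before deleting anything under the failover state path.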