
Access Splunk UF metadata


Splunk UF internal logs are picked up by a passthrough Pipeline in Cribl, with a Route filter based on `index.startsWith('_')`. That works fine.
The problem: I lose all meta information about the Splunk UFs, like version and OS. Can this be prevented somehow?
In Splunk I just see all the Cribl Workers and some machines (heavy forwarders) that are sending data directly to Splunk.
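For reference, the Route filter described above is a plain JavaScript expression. A minimal sketch of what it matches (the `index` field is set by the Splunk source; Splunk's internal indexes all start with an underscore):

```javascript
// Route filter logic: match Splunk internal indexes
// (_internal, _audit, _introspection, ...), which all start with "_".
const matchesInternal = (index) =>
  typeof index === "string" && index.startsWith("_");

console.log(matchesInternal("_internal")); // true
console.log(matchesInternal("main"));      // false
```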


Original post was from https://cribl-community.slack.com/archives/CPYBPK65V/p1693930511929089


Best Answer

  • Jon Rust
    Jon Rust Posts: 439 mod
    Answer ✓

Internal fields can be accessed in Pipelines like any other field, but to view them you need to do as Tony shows above.
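For example, an Eval function in a Pipeline can copy an internal (double-underscore) field onto the event as a regular field so it survives to the destination. A sketch of that logic; the field names here (`__srcIpPort`, `uf_src`) are illustrative, not confirmed by this thread — run a capture with internal fields shown to see what your source actually sets:

```javascript
// Sketch of what a Pipeline Eval function does: promote an internal
// field (hidden from destinations) to a regular event field.
// "__srcIpPort" and "uf_src" are example names; check a capture with
// internal fields enabled for the real names in your environment.
function evalCopyInternal(event) {
  if (event.__srcIpPort !== undefined) {
    event.uf_src = event.__srcIpPort; // now visible downstream
  }
  return event;
}

const evt = { _raw: "sample", __srcIpPort: "10.0.0.5:9997" };
console.log(evalCopyInternal(evt).uf_src); // "10.0.0.5:9997"
```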

Answers

  • Jon Rust
    Jon Rust Posts: 439 mod

I believe you can find the metadata in internal fields (in recent versions of Cribl).

  • Mike Dupuis
    Mike Dupuis Posts: 14 admin

I will check it. But there is no setting I have to enable in Cribl?

  • Tony Reinke - Cribl

  • Mike Dupuis
    Mike Dupuis Posts: 14 admin

I see this information in the Worker log files,

but I do not see it in any capture. I captured before the Routes for splunk_tcp_in; it's not in _raw, the meta fields, or any internal field.

  • Anson VanDoren

That info is only sent by the forwarders at connection time (which is when you see it logged in your screenshot above). We don't currently append it to each event coming in over that connection.

  • Mike Dupuis
    Mike Dupuis Posts: 14 admin

OK. That's of course not ideal if you want to check in Splunk what kinds of versions you have in your network with 20-30k forwarders. Would it be feasible to make this information available as an option?

  • Anson VanDoren

I believe what others are doing to accomplish this now is using the Cribl Logs Source to pick up this information and route it to a Splunk Destination for monitoring. If that's not a feasible option for you, though, it's probably worth bringing up in #feature-request with additional info about why that solution doesn't work for you. IIRC, there were a few proposed solutions earlier this year when we added the metadata to the connection log, and we settled on this one because it met the requirements of the customers who were requesting this additional metadata at the time.
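To route only the relevant lines from the Cribl Logs Source to Splunk, a Route filter could match the connection-log entries that carry forwarder metadata. A hedged sketch; the message wording and field names below are hypothetical, since the actual log format isn't shown in this thread — inspect your own Worker logs for the real text:

```javascript
// Hypothetical filter for Cribl Logs events: keep only connection-log
// lines that mention forwarder version info, for routing to Splunk.
// The matched substrings are placeholders; verify against real log lines.
const wantsForwarderInfo = (evt) =>
  typeof evt.message === "string" &&
  evt.message.includes("connection") &&
  evt.message.includes("version");

console.log(wantsForwarderInfo({ message: "new connection, version=9.0.4" })); // true
console.log(wantsForwarderInfo({ message: "heartbeat" }));                     // false
```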

  • Anson VanDoren

If you do raise it as an FR, you may want to reference CRIBL-5397 so that PMs can find the internal discussion around the existing implementation.