Is anyone sending PerfmonMetrics through Cribl?

We're doing a POC for Cribl and I'm wondering if anyone is sending PerfmonMetrics through it. I tried filtering the route on __inputId=='splunk:in_splunk_tcp' && index=='win_em_metrics' and sending it through the passthru pipeline, but even though I'm seeing the events in a capture right before the destination, they aren't showing up in the metrics index win_em_metrics like I'd expect.

Answers

  • xpac xpac
    xpac xpac Posts: 148 ✭✭✭

    do you see them as Metrics in Cribl? They should get that little M icon

  • jlstanley
    jlstanley Posts: 21

    No, but they're coming in from the UF over normal port 9997 communications, so I'd think they shouldn't get converted to metrics until they hit the props/transforms on the indexers, since they're regular PerfmonMetrics inputs.

  • xpac xpac
    xpac xpac Posts: 148 ✭✭✭
    edited October 2023

    Well, now you get to learn one of the most important things in this POC :wink:

    Cribl is like a HF: it does all the work. The indexers will consider everything from Cribl as "cooked"/processed and will only save it to disk.

    No indexer props/transforms are getting applied here.
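
    For example, the index-time, log-to-metrics settings that would normally turn PerfmonMetrics events into metrics on the indexer look roughly like this (illustrative stanza and schema names, not the actual Windows TA config):

        # props.conf on the indexer (illustrative sketch only)
        [PerfmonMetrics:CPU]
        # log-to-metrics conversion that would normally run at index time
        METRIC-SCHEMA-TRANSFORMS = metric-schema:perfmon_metrics

        # transforms.conf (illustrative sketch only)
        [metric-schema:perfmon_metrics]
        # treat the numeric fields in the event as metric measures
        METRIC-SCHEMA-MEASURES = _ALLNUMS_

    Since the data arrives already cooked, none of that fires, so the conversion has to happen before the events reach the indexer.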

  • jlstanley
    jlstanley Posts: 21

    Well damn. So basically, for any data source we bring through Cribl, we need to redo everything the TA on the indexer would have done.

  • xpac xpac
    xpac xpac Posts: 148 ✭✭✭

    correct

  • jlstanley
    jlstanley Posts: 21

    alright, thanks.

  • David Maislin
    David Maislin Posts: 228 mod

    I recall we had a customer with Splunk App for Infrastructure where we sent to HEC so that the app could still convert the logs to metrics in Splunk.
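
    If you try that, the Splunk side is basically just a HEC token; something roughly like this (names and values are placeholders):

        # inputs.conf on the Splunk side (placeholder names and token)
        [http]
        disabled = 0
        port = 8088

        [http://from_cribl]
        disabled = 0
        token = <generated-token>
        index = win_em_metrics

    You'd then point a Splunk HEC destination in Cribl at that token, and the log-to-metrics conversion can still happen on the Splunk side.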

  • jlstanley
    jlstanley Posts: 21

    OK, we're using IT Essentials Work, but the same props/transforms would apply, so I might try that. I'm wondering if there's any real benefit to sending it through Cribl for that Windows metrics use case, though, or whether it's technically just acting as a passthrough and maybe adding an extra Cribl field like cribl_pipe.

  • David Maislin
    David Maislin Posts: 228 mod

    Let me know if it works and perhaps you could come up with a cool use case :slightly_smiling_face:

  • xpac xpac
    xpac xpac Posts: 148 ✭✭✭

    Start at the start: Why are you sending through Cribl? :slightly_smiling_face:

  • jlstanley
    jlstanley Posts: 21

    Ideally to show how Cribl can save us money/ingest :slightly_smiling_face:

  • xpac xpac
    xpac xpac Posts: 148 ✭✭✭

    That usually means dropping events, aggregating data, or removing unnecessary parts of an event... which means you'd at least need to parse the event in Cribl to figure out what you want to do with it.

  • David Maislin
    David Maislin Posts: 228 mod

    Cause everything is better with Cribl! Even with passthru, just being able to manage everything in the UI, see real-time preview of events in the stream, and make the life of a Splunk admin better, it all adds up to value that can't be measured with just a calculation.

  • jlstanley
    jlstanley Posts: 21
    edited October 2023

    Agreed, except for the passthru still "cooking" the data from the indexer's point of view. I initially thought we'd be able to send whatever data source we wanted to Cribl and just use passthru while deciding whether to streamline, drop, or transform the data, until my bubble got burst by the fact that the indexers no longer apply their normal props/transforms to that sourcetype because it's now cooked, which in essence means we'd have to redo all the index-time extractions and transforms. I can see where it makes sense for data sources that don't have good TAs already, or have no TA at all, but it means we need to pick and choose more carefully what we plan on sending through Cribl.

  • xpac xpac
    xpac xpac Posts: 148 ✭✭✭

    I mean, most TAs do pretty limited index-time work. Set the timestamp, maybe split the sourcetype using a few regexes, and that's often it; everything else is search time.
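
    Something like this is often the whole index-time story for a TA (made-up stanza and transform names, just to illustrate):

        # props.conf (illustrative)
        [vendor:product:syslog]
        TIME_PREFIX = ^
        TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N%z
        TZ = UTC
        TRANSFORMS-split_st = force_sourcetype_firewall

        # transforms.conf (illustrative)
        [force_sourcetype_firewall]
        # index-time sourcetype rewrite based on a marker in the raw event
        REGEX = \sFIREWALL:\s
        DEST_KEY = MetaData:Sourcetype
        FORMAT = sourcetype::vendor:product:firewall

    Recreating that in Cribl is typically a timestamp setting plus an Eval or two, which is part of why it's less work than it sounds.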

  • jlstanley
    jlstanley Posts: 21

    That brings up a question: is there any way to send from Cribl without "cooking" the data, so the indexers still do their normal transforms, other than sending it via HEC?

  • xpac xpac
    xpac xpac Posts: 148 ✭✭✭
    edited October 2023

    Well, you can send it as syslog or anything comparably ugly :slightly_smiling_face:

    but I'd really advise against doing that :slightly_smiling_face:

    and there's a dirty hack to force Splunk to re-parse, but you're clearly stepping into unsupported territory there

  • jlstanley
    jlstanley Posts: 21
    edited October 2023

    what "hack" are you referring to?

    just curious

  • xpac xpac
    xpac xpac Posts: 148 ✭✭✭

    some setting in inputs.conf that effectively tells Splunk to send received data to a different point in the processing pipeline

  • David Maislin
    David Maislin Posts: 228 mod

    Yeah, I crashed my Splunk doing that hack lol!

  • jlstanley
    jlstanley Posts: 21

    gotcha

  • xpac xpac
    xpac xpac Posts: 148 ✭✭✭
    edited October 2023

    Honestly, I'd bite into the sour apple once (a literal translation of a German saying, i.e. bite the bullet), adapt your TAs, and be done with it

    it's usually less work than you think

  • jlstanley
    jlstanley Posts: 21
    edited October 2023

    I'm going through the admin training on http://university.cribl.io now to get a better handle on doing that efficiently, so we'll see how it goes.

  • xpac xpac
    xpac xpac Posts: 148 ✭✭✭

    if you're coming from Splunk, and you think of "doing all that shit in props + transforms" - it's SOOO much easier in Cribl :smile:

  • jlstanley
    jlstanley Posts: 21

    I am coming from Splunk. But I've already done the props/transforms work for most things, so I mainly don't want to break currently working data sources while moving them to Cribl, at least until I get a good handle on Cribl event manipulation and can make sure all the relevant fields still get extracted properly.

  • David Maislin
    David Maislin Posts: 228 mod

    Cribl is the cherry on top of the Splunk sundae!

  • xpac xpac
    xpac xpac Posts: 148 ✭✭✭

    Someone turned on advertisement mode for @David Maislin :smile:

  • David Maislin
    David Maislin Posts: 228 mod

    :ice_cream:

  • xpac xpac
    xpac xpac Posts: 148 ✭✭✭

    But as I said, usually most fields are extracted at search time and index-time processing is rather limited. Also, I think you'll get a handle on how to use Cribl properly very quickly :slightly_smiling_face:

  • morrisnky
    morrisnky Posts: 16

    You can get metrics events to work with Splunk IT Essentials and the Publish Metrics function in Stream; it's a bit fiddly, but I have done it. Unfortunately, the customer was in a closed environment, so I could not export the pack. I echo what the esteemed community brethren have stated above: most Splunk parsing is done at search time, which makes sense for schema-on-the-fly. However, there are some cases where this does not work great. Lookups are a key one; I much prefer managing these within Stream.