Is anyone sending PerfmonMetrics through Cribl?
We're doing a POC of Cribl and I'm wondering if anyone is sending PerfmonMetrics through it. I tried it by filtering the route on __inputId=='splunk:in_splunk_tcp' && index=='win_em_metrics' and sending it through the passthru pipeline, and even though I'm seeing the events in the capture right before the destination, they aren't showing up in the metrics index win_em_metrics like I'd expect.
Answers
-
do you see them as Metrics in Cribl? They should get that little M icon
0 -
no, but they're coming in from the UF over normal port 9997 communications, so I'd think they shouldn't get converted to metrics until they hit the props/transforms on the indexers, since they are regular PerfmonMetrics inputs.
0 -
Well, now you've learned one of the most important things in this POC:
Cribl acts like a HF. It does all the parsing work, so the indexers consider everything arriving from Cribl as already "processed" and will only save it to disk.
No indexer-side props/transforms are getting applied here.
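For reference, the indexer-side conversion being skipped here is the index-time log-to-metrics schema from the Windows TA. A rough sketch of what that mechanism looks like (stanza and field names are illustrative, not copied from the actual TA; check the props/transforms you actually have deployed):

```ini
# props.conf on the indexer -- index-time settings like this only apply
# to "uncooked" data, so they are skipped once Cribl has parsed the event
[PerfmonMetrics]
METRIC-SCHEMA-TRANSFORMS = metric-schema:perfmon_metrics

# transforms.conf -- converts the log event into a metric data point
[metric-schema:perfmon_metrics]
METRIC-LIST = Value
```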
0 -
well damn. so basically, for any data source we bring through Cribl, we need to redo everything the TA on the indexer would have done.
0 -
correct
0 -
alright, thanks.
0 -
I recall we had a customer using Splunk App for Infrastructure where we sent to HEC so that the app could still convert the logs to metrics in Splunk.
0 -
ok. we're using IT Essentials Work, but the same props/transforms would apply, so I might try that. I'm wondering if there's any real benefit to sending it through Cribl in that Windows metrics use case, though, or whether it's technically just acting as a passthrough and maybe adding an extra Cribl field like cribl_pipe.
0 -
Let me know if it works and perhaps you could come up with a cool use case :slightly_smiling_face:
0 -
Start at the start: Why are you sending through Cribl? :slightly_smiling_face:
0 -
Ideally to show how Cribl can save us money/ingest :slightly_smiling_face:
0 -
That usually means dropping events, aggregating data, or removing unnecessary parts of an event... which means you'd at least need to parse the event in Cribl to figure out what you want to do with it.
0 -
Cause everything is better with Cribl! Even with passthru, just being able to manage everything in the UI, see real-time preview of events in the stream, and make the life of a Splunk admin better, it all adds up to value that can't be measured with just a calculation.
0 -
agreed, except for the passthru still "cooking" the data from the indexer's point of view. I was initially thinking we'd be able to send whatever data source we wanted to Cribl and just use passthru while we decided whether to streamline, drop, or transform the data. Then my bubble got burst by learning that the indexers no longer apply their normal props/transforms to that sourcetype because it's now cooked, which in essence means we'd have to redo all the index-time extractions and transforms if we did that. I can see where it makes sense for data sources that don't have good TAs (or any TA at all), but it means we need to pick and choose more carefully what we plan on sending through Cribl.
0 -
I mean, most TAs do pretty limited index-time work: set the timestamp, maybe split the sourcetype using a few regexes, and that's often it. Everything else is search time.
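To illustrate, the index-time footprint of a typical TA is often no more than something like this (hypothetical sourcetype and stanza names, just a sketch of the pattern described above):

```ini
# props.conf -- the usual extent of a TA's index-time work:
# timestamping, line breaking, and maybe a sourcetype split
[acme:app]
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 19
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TRANSFORMS-split = acme_split_errors

# transforms.conf -- rewrite the sourcetype with a single regex
[acme_split_errors]
REGEX = ^ERROR
FORMAT = sourcetype::acme:app:error
DEST_KEY = MetaData:Sourcetype
```

Field extractions, lookups, and tags usually live in search-time props on the search head, which sending through Cribl doesn't affect.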
0 -
that brings up a question: is there any way to send from Cribl without "cooking" the data, so the indexers still do their normal transforms, other than sending it via HEC?
0 -
Well, you can send it as syslog or anything comparably ugly :slightly_smiling_face:
but I'd really advise against doing that :slightly_smiling_face:
and there's a dirty hack to force Splunk to re-parse, but you're clearly stepping into unsupported territory there
0 -
what "hack" are you referring to?
just curious
0 -
some setting in inputs.conf that effectively tells Splunk to send received data to a different point in the processing pipeline
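For the curious, and with the warnings above firmly in mind: the setting alluded to is, if I recall correctly, the route attribute on the splunktcp stanza, which normally sends cooked data straight to indexQueue; pointing everything at parsingQueue forces a re-parse. Roughly (unsupported, from memory, try only in a lab):

```ini
# inputs.conf on the indexer -- UNSUPPORTED hack: route all received
# S2S data back through the parsing pipeline instead of letting
# cooked events go straight to indexQueue
[splunktcp://9997]
route = has_key:_utf8:parsingQueue;has_key:_linebreaker:parsingQueue;absent_key:_utf8:parsingQueue;absent_key:_linebreaker:parsingQueue
```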
0 -
Yeah, I crashed my Splunk doing that hack lol!
0 -
gotcha
0 -
Honestly, I'd just bite into the sour apple (literal translation of a German saying for "bite the bullet"), adapt your TAs, and be done with it
it's usually less work than you think
0 -
I'm going through the admin training on http://university.cribl.io now to get a better handle on doing that efficiently, so we'll see how it goes.
0 -
if you're coming from Splunk, and you think of "doing all that shit in props + transforms" - it's SOOO much easier in Cribl
0 -
I am coming from Splunk. But I've already done the props/transforms work for most things, so mainly I don't want to break currently working data sources while moving them to Cribl, at least until I get a good handle on Cribl event manipulation, so all the relevant fields still get extracted properly.
0 -
Cribl is the Cherry On the Top of the Splunk Sundae!
0 -
someone turned on advertisement mode on <@U01C35EMQ01>
0 -
:ice_cream:
0 -
but as I said... usually most fields are extracted at search time, and index-time processing is rather limited. also, I think you'll get a handle on how to properly use Cribl very quickly :slightly_smiling_face:
0 -
You can get metrics events to work with Splunk IT Essentials using the Publish Metrics function in Stream. It's a bit fiddly, but I have done it. Unfortunately, the customer was in a closed environment, so I could not export the pack. I echo what the esteemed community brethren have stated above: most Splunk parsing is done at search time, as makes sense for schema-on-the-fly. However, there are some examples where this does not work great. Lookups are a key one - I much prefer managing these within Stream.
0