Anyone ever pulled in events from Box?

Anyone ever pulled in events from Box? We use the Box add-on for Splunk today and have started having some issues with it. Pretty sure it is API based.

Answers

  • Raanan Dagan
    Raanan Dagan Posts: 101 mod

    Hi Shawn, I have not tried this option before, but I see all these curl commands / REST API options: https://developer.box.com/reference/get-collections/. I hope that will do the trick.
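
    Since the thread is about events rather than collections, a minimal sketch against Box's events endpoint might look like this in Python (the token value is a placeholder, and the admin_logs stream type is an assumption about which events Shawn wants):

    ```python
    import requests

    # Placeholder token; any valid Box access token works here.
    TOKEN = "YOUR_ACCESS_TOKEN"

    # Pull enterprise (admin_logs) events from the Box events endpoint.
    resp = requests.get(
        "https://api.box.com/2.0/events",
        headers={"Authorization": f"Bearer {TOKEN}"},
        params={"stream_type": "admin_logs", "limit": 100},
    )
    resp.raise_for_status()
    for event in resp.json().get("entries", []):
        print(event.get("event_type"), event.get("created_at"))
    ```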

  • nicktank
    nicktank Posts: 26 mod

    <@UJYBKAR2Q> We actually have done this internally. We hit the Box access APIs, dump the events in S3, and then query using Search. The use case internally was to see which documents are being accessed most often, but there is a lot more data there. Let me get with the group that did this and see about the details.
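
    For reference, the S3 landing step could be as simple as this sketch (the bucket name, prefix, and key layout are made up; the actual internal pipeline likely differs):

    ```python
    import json
    import time

    import boto3

    s3 = boto3.client("s3")

    def dump_events_to_s3(events, bucket="box-events", prefix="box/"):
        """Write a batch of Box events to S3 as newline-delimited JSON."""
        if not events:
            return
        # Bucket and prefix are hypothetical; point them at whatever Search reads.
        body = "\n".join(json.dumps(e) for e in events)
        key = f"{prefix}events-{int(time.time())}.ndjson"
        s3.put_object(Bucket=bucket, Key=key, Body=body.encode("utf-8"))
    ```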

  • Shawn Cannon
    Shawn Cannon Posts: 131 ✭✭

    OK, we use the Box add-on for Splunk right now; I just wanted some options to pull the data in without it. We'd still need to send it to Splunk for now.

  • nicktank
    nicktank Posts: 26 mod

    Wouldn't be an issue; we just used S3 as it's the best place for Cribl Search.

  • Oliver Hoppe
    Oliver Hoppe Posts: 50 ✭✭

    Hi <@UJYBKAR2Q> It is not possible to integrate Box with Cribl, since Box uses OAuth 2.0, which is currently not supported by Cribl; OAuth 2.0 works only with Office 365 in Cribl. I used the Splunk add-on for Box and it was working fine.

  • Steve Litras
    Steve Litras Posts: 12 admin

    Nick is correct - we've done this internally. We use the OAuth2 token endpoint to log in with the client ID, client secret, grant type (client_credentials), box_subject_type (enterprise), and box_subject_id (our Box enterprise ID).
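
    Based on the parameters Steve lists, the token request might look roughly like this (all credential values are placeholders):

    ```python
    import requests

    # Placeholders for your own Box app's Client Credentials Grant values.
    payload = {
        "client_id": "YOUR_CLIENT_ID",
        "client_secret": "YOUR_CLIENT_SECRET",
        "grant_type": "client_credentials",
        "box_subject_type": "enterprise",
        "box_subject_id": "YOUR_ENTERPRISE_ID",
    }

    # Box's OAuth2 token endpoint takes these as form fields.
    resp = requests.post("https://api.box.com/oauth2/token", data=payload)
    resp.raise_for_status()
    access_token = resp.json()["access_token"]
    ```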

  • Steve Litras
    Steve Litras Posts: 12 admin

    The pagination was a bit tricky to get right (the bruise on my head from banging it against the wall healed eventually :slightly_smiling_face:).
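
    One hedged sketch of that pagination, assuming the admin_logs stream and its stream_position / next_stream_position cursor (Steve's actual code may differ):

    ```python
    import requests

    def fetch_all_events(token, stream_position="0"):
        """Page through the enterprise events stream until it runs dry."""
        headers = {"Authorization": f"Bearer {token}"}
        while True:
            resp = requests.get(
                "https://api.box.com/2.0/events",
                headers=headers,
                params={
                    "stream_type": "admin_logs",
                    "stream_position": stream_position,
                    "limit": 500,
                },
            )
            resp.raise_for_status()
            page = resp.json()
            entries = page.get("entries", [])
            if not entries:
                break
            yield from entries
            # Resume from where this page left off on the next request.
            stream_position = page["next_stream_position"]
    ```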

  • Steve Litras
    Steve Litras Posts: 12 admin

    We run it on a 5-minute schedule, and as Nick mentioned, we feed the data to an S3 bucket (we do some shaping and enrich the data with org info), which is configured in our Search instance.
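
    Putting the pieces together, a single scheduled pass might look like this sketch (the checkpoint file, bucket, and key layout are all assumptions, and the shaping/enrichment step is omitted):

    ```python
    import json
    from pathlib import Path

    import boto3
    import requests

    STATE = Path("stream_position.txt")  # hypothetical local checkpoint file

    def run_once(token, bucket="box-events"):
        """One scheduled pass: resume from the checkpoint, land new events in S3."""
        position = STATE.read_text().strip() if STATE.exists() else "0"
        resp = requests.get(
            "https://api.box.com/2.0/events",
            headers={"Authorization": f"Bearer {token}"},
            params={"stream_type": "admin_logs", "stream_position": position, "limit": 500},
        )
        resp.raise_for_status()
        page = resp.json()
        entries = page.get("entries", [])
        if entries:
            boto3.client("s3").put_object(
                Bucket=bucket,
                Key=f"box/events-{page['next_stream_position']}.ndjson",
                Body="\n".join(json.dumps(e) for e in entries).encode("utf-8"),
            )
        # Persist the position so the next 5-minute run picks up where this one left off.
        STATE.write_text(str(page["next_stream_position"]))
    ```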