Collect Code42 Incydr core datasets

Supported in:
Google SecOps SIEM

This document explains how to ingest Code42 Incydr core datasets (Users, Sessions, Audit, Cases, and optionally File Events) into Google Security Operations using an Amazon S3 bucket.

Before you begin

  • Google SecOps instance
  • Privileged access to Code42 Incydr
  • Privileged access to AWS (S3, IAM, Lambda, EventBridge)

Collect source prerequisites (IDs, API keys, org IDs, tokens)

  1. Sign in to the Code42 Incydr web UI.
  2. Go to Administration > Integrations > API Clients.
  3. Create a new Client.
  4. Copy and save the following details in a secure location:
    1. Client ID.
    2. Client Secret.
    3. Base URL: (for example, https://api.us.code42.com, https://api.us2.code42.com, https://api.ie.code42.com, https://api.gov.code42.com).
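
The Client ID and Client Secret you save here are exchanged for a short-lived bearer token by sending an HTTP Basic authentication request to `<Base URL>/v1/oauth`. As a minimal sketch of how the Basic header is assembled (the `build_basic_auth_header` helper name is illustrative, not part of any Incydr SDK):

    ```python
    import base64

    def build_basic_auth_header(client_id: str, client_secret: str) -> str:
        """Build the HTTP Basic Authorization header value that is sent
        with POST <Base URL>/v1/oauth to obtain a bearer token."""
        token = base64.b64encode(f"{client_id}:{client_secret}".encode()).decode()
        return f"Basic {token}"
    ```

The Lambda function later in this document builds the same header internally; this sketch is only to show what the saved credentials are used for.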

Configure AWS S3 bucket and IAM for Google SecOps

  1. Create an Amazon S3 bucket following this user guide: Creating a bucket.
  2. Save the bucket Name and Region for later use.
  3. Create a user following this user guide: Creating an IAM user.
  4. Select the created User.
  5. Select the Security credentials tab.
  6. Click Create Access Key in the Access Keys section.
  7. Select Third-party service as the Use case.
  8. Click Next.
  9. Optional: add a description tag.
  10. Click Create access key.
  11. Click Download CSV file to save the Access Key and Secret Access Key for later use.
  12. Click Done.
  13. Select the Permissions tab.
  14. Click Add permissions in the Permissions policies section.
  15. Select Add permissions.
  16. Select Attach policies directly.
  17. Search for and select the AmazonS3FullAccess policy.
  18. Click Next.
  19. Click Add permissions.
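
Note: AmazonS3FullAccess is broader than this integration needs. If your organization prefers least privilege, a scoped write-only policy along these lines is sufficient for the Lambda's uploads (the bucket name is a placeholder you must substitute):

    ```json
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": ["s3:PutObject"],
          "Resource": "arn:aws:s3:::<your-bucket>/*"
        }
      ]
    }
    ```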

Set up AWS Lambda for polling Code42 Incydr (no transform)

  1. In the AWS Console, go to Lambda > Functions > Create function.
  2. Click Author from scratch.
  3. Provide the following configuration details:
    • Name: Enter a unique and meaningful name (for example, code42-incydr-pull)
    • Runtime: Select Python 3.13.
    • Permissions: Select an execution role that grants s3:PutObject on the target bucket and CloudWatch Logs write access.
  4. Click Create function.
  5. Select Configuration > General configuration > Edit.
  6. Configure Timeout = 5 minutes and Memory = 1024 MB.
  7. Click Save.
  8. Select Configuration > Environment variables > Edit > Add.
    1. INCYDR_BASE_URL = https://api.us.code42.com
    2. INCYDR_CLIENT_ID = <Client ID>
    3. INCYDR_CLIENT_SECRET = <Client Secret>
    4. S3_BUCKET = code42-incydr
    5. S3_PREFIX = code42/
    6. PAGE_SIZE = 500
    7. LOOKBACK_MINUTES = 60
    8. STREAMS = users,sessions,audit,cases
    9. Optional: FE_ADV_QUERY_JSON = ``
    10. Optional: FE_PAGE_SIZE = 1000
  9. Click Save.
  10. Select Code and enter the following Python code:

    import base64, json, os, time
    from datetime import datetime, timedelta, timezone
    from urllib.parse import urlencode
    from urllib.request import Request, urlopen

    import boto3

    BASE = os.environ["INCYDR_BASE_URL"].rstrip("/")
    CID = os.environ["INCYDR_CLIENT_ID"]
    CSECRET = os.environ["INCYDR_CLIENT_SECRET"]
    BUCKET = os.environ["S3_BUCKET"]
    PREFIX_BASE = os.environ.get("S3_PREFIX", "code42/")
    PAGE_SIZE = int(os.environ.get("PAGE_SIZE", "500"))
    LOOKBACK_MINUTES = int(os.environ.get("LOOKBACK_MINUTES", "60"))
    STREAMS = [s.strip() for s in os.environ.get("STREAMS", "users").split(",") if s.strip()]
    FE_ADV_QUERY_JSON = os.environ.get("FE_ADV_QUERY_JSON", "").strip()
    FE_PAGE_SIZE = int(os.environ.get("FE_PAGE_SIZE", "1000"))

    s3 = boto3.client("s3")

    def now_utc():
        return datetime.now(timezone.utc)

    def iso_minus(minutes: int):
        return (now_utc() - timedelta(minutes=minutes)).strftime("%Y-%m-%dT%H:%M:%SZ")

    def put_bytes(key: str, body: bytes):
        s3.put_object(Bucket=BUCKET, Key=key, Body=body)

    def put_json(prefix: str, page_label: str, data):
        ts = now_utc().strftime("%Y/%m/%d/%H%M%S")
        key = f"{PREFIX_BASE}{prefix}{ts}-{page_label}.json"
        put_bytes(key, json.dumps(data).encode("utf-8"))
        return key

    def auth_header():
        auth = base64.b64encode(f"{CID}:{CSECRET}".encode()).decode()
        req = Request(f"{BASE}/v1/oauth", data=b"", method="POST")
        req.add_header("Authorization", f"Basic {auth}")
        req.add_header("Accept", "application/json")
        with urlopen(req, timeout=30) as r:
            data = json.loads(r.read().decode())
        return {"Authorization": f"Bearer {data['access_token']}", "Accept": "application/json"}

    def http_get(path: str, params: dict | None = None, headers: dict | None = None):
        url = f"{BASE}{path}"
        if params:
            url += ("?" + urlencode(params))
        req = Request(url, method="GET")
        for k, v in (headers or {}).items():
            req.add_header(k, v)
        with urlopen(req, timeout=60) as r:
            return r.read()

    def http_post_json(path: str, body: dict, headers: dict | None = None):
        url = f"{BASE}{path}"
        req = Request(url, data=json.dumps(body).encode("utf-8"), method="POST")
        req.add_header("Content-Type", "application/json")
        for k, v in (headers or {}).items():
            req.add_header(k, v)
        with urlopen(req, timeout=120) as r:
            return r.read()

    # USERS (/v1/users)
    def pull_users(hdrs):
        next_token = None
        pages = 0
        while True:
            params = {"active": "true", "blocked": "false", "pageSize": PAGE_SIZE}
            if next_token:
                params["pgToken"] = next_token
            raw = http_get("/v1/users", params, hdrs)
            data = json.loads(raw.decode())
            put_json("users/", f"users-page-{pages}", data)
            pages += 1
            next_token = data.get("nextPgToken") or data.get("next_pg_token")
            if not next_token:
                break
        return pages

    # SESSIONS (/v1/sessions) — alerts live inside sessions
    def pull_sessions(hdrs):
        start_iso = iso_minus(LOOKBACK_MINUTES)
        next_token = None
        pages = 0
        while True:
            params = {
                "hasAlerts": "true",
                "startTime": start_iso,
                "pgSize": PAGE_SIZE,
            }
            if next_token:
                params["pgToken"] = next_token
            raw = http_get("/v1/sessions", params, hdrs)
            data = json.loads(raw.decode())
            put_json("sessions/", f"sessions-page-{pages}", data)
            pages += 1
            next_token = data.get("nextPgToken") or data.get("next_page_token")
            if not next_token:
                break
        return pages

    # AUDIT LOG (/v1/audit) — CSV export or paged JSON; write as received
    def pull_audit(hdrs):
        start_iso = iso_minus(LOOKBACK_MINUTES)
        next_token = None
        pages = 0
        while True:
            params = {"startTime": start_iso, "pgSize": PAGE_SIZE}
            if next_token:
                params["pgToken"] = next_token
            raw = http_get("/v1/audit", params, hdrs)
            try:
                data = json.loads(raw.decode())
                put_json("audit/", f"audit-page-{pages}", data)
                next_token = data.get("nextPgToken") or data.get("next_page_token")
                pages += 1
                if not next_token:
                    break
            except Exception:
                # Response is not JSON (for example, a CSV export); store the raw bytes as-is
                ts = now_utc().strftime("%Y/%m/%d/%H%M%S")
                key = f"{PREFIX_BASE}audit/{ts}-audit-export.bin"
                put_bytes(key, raw)
                pages += 1
                break
        return pages

    # CASES (/v1/cases)
    def pull_cases(hdrs):
        next_token = None
        pages = 0
        while True:
            params = {"pgSize": PAGE_SIZE}
            if next_token:
                params["pgToken"] = next_token
            raw = http_get("/v1/cases", params, hdrs)
            data = json.loads(raw.decode())
            put_json("cases/", f"cases-page-{pages}", data)
            pages += 1
            next_token = data.get("nextPgToken") or data.get("next_page_token")
            if not next_token:
                break
        return pages

    # FILE EVENTS (/v2/file-events/search) — enabled only if you provide FE_ADV_QUERY_JSON
    def pull_file_events(hdrs):
        if not FE_ADV_QUERY_JSON:
            return 0
        try:
            base_query = json.loads(FE_ADV_QUERY_JSON)
        except Exception:
            raise RuntimeError("FE_ADV_QUERY_JSON is not valid JSON")
        pages = 0
        next_token = None
        while True:
            body = dict(base_query)
            body["pgSize"] = FE_PAGE_SIZE
            if next_token:
                body["pgToken"] = next_token
            raw = http_post_json("/v2/file-events/search", body, hdrs)
            data = json.loads(raw.decode())
            put_json("file_events/", f"fileevents-page-{pages}", data)
            pages += 1
            next_token = (
                data.get("nextPgToken")
                or data.get("next_page_token")
                or (data.get("file_events") or {}).get("nextPgToken")
            )
            if not next_token:
                break
        return pages

    def handler(event, context):
        hdrs = auth_header()
        report = {}
        if "users" in STREAMS:
            report["users_pages"] = pull_users(hdrs)
        if "sessions" in STREAMS:
            report["sessions_pages"] = pull_sessions(hdrs)
        if "audit" in STREAMS:
            report["audit_pages"] = pull_audit(hdrs)
        if "cases" in STREAMS:
            report["cases_pages"] = pull_cases(hdrs)
        if "file_events" in STREAMS:
            report["file_events_pages"] = pull_file_events(hdrs)
        return report

    def lambda_handler(event, context):
        return handler(event, context)
    
  11. Click Deploy.
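
After the function runs, objects land in S3 under date-based keys built from S3_PREFIX, the stream name, and a UTC timestamp. A minimal sketch of the key scheme that `put_json` in the code above follows (the timestamp is fixed here purely for illustration):

    ```python
    from datetime import datetime, timezone

    def object_key(prefix_base: str, stream: str, page_label: str, ts: datetime) -> str:
        """Mirror the Lambda's key scheme:
        <S3_PREFIX><stream>/<YYYY/MM/DD/HHMMSS>-<page_label>.json"""
        stamp = ts.strftime("%Y/%m/%d/%H%M%S")
        return f"{prefix_base}{stream}/{stamp}-{page_label}.json"

    example = object_key(
        "code42/", "users", "users-page-0",
        datetime(2025, 1, 15, 12, 30, 0, tzinfo=timezone.utc),
    )
    # example == "code42/users/2025/01/15/123000-users-page-0.json"
    ```

Knowing this layout helps when verifying uploads in the S3 console and when setting the feed's S3 URI later.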

Create an EventBridge schedule

  1. In the AWS Console, go to Amazon EventBridge > Rules.
  2. Click Create rule.
  3. Provide the following configuration details:
    • Schedule pattern: Select Fixed rate of 1 hour.
    • Name: Enter a unique and meaningful name (for example, code42-incydr-hourly).
    • Target: Select Lambda function and choose code42-incydr-pull.
  4. Click Create rule.

Optional: Create read-only IAM user & keys for Google SecOps

  1. In the AWS Console, go to IAM > Users, then click Add users.
  2. Provide the following configuration details:
    • User: Enter a unique name (for example, secops-reader)
    • Access type: Select Access key - Programmatic access
    • Click Create user.
  3. Attach minimal read policy (custom): Users > select secops-reader > Permissions > Add permissions > Attach policies directly > Create policy
  4. In the JSON editor, enter the following policy:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": ["s3:GetObject"],
          "Resource": "arn:aws:s3:::<your-bucket>/*"
        },
        {
          "Effect": "Allow",
          "Action": ["s3:ListBucket"],
          "Resource": "arn:aws:s3:::<your-bucket>"
        }
      ]
    }
    
  5. Set the name to secops-reader-policy.

  6. Click Create policy, then back on the Attach policies directly screen, search for and select secops-reader-policy > Next > Add permissions.

  7. Go to Security credentials > Access keys > Create access key.

  8. Download the CSV file to save the Access key ID and Secret access key; you will enter these values in the Google SecOps feed.

Configure a feed in Google SecOps to ingest the Code42 Incydr log

  1. Go to SIEM Settings > Feeds.
  2. Click Add New Feed.
  3. In the Feed name field, enter a name for the feed (for example, Code42 Incydr Datasets).
  4. Select Amazon S3 V2 as the Source type.
  5. Select Code42 Incydr as the Log type.
  6. Click Next.
  7. Specify values for the following input parameters:
    • S3 URI: s3://code42-incydr/code42/ (substitute your bucket name and S3_PREFIX value).
    • Source deletion options: Select the deletion option according to your preference.
    • Maximum File Age: Default 180 Days.
    • Access Key ID: User access key with access to the S3 bucket.
    • Secret Access Key: User secret key with access to the S3 bucket.
    • Asset namespace: The asset namespace.
    • Ingestion labels: The label to be applied to the events from this feed.
  8. Click Next.
  9. Review your new feed configuration in the Finalize screen, and then click Submit.


Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.

Last updated 2025-11-03 UTC.