Collect Duo Telephony logs

Supported in:
Google SecOps SIEM

This document explains how to ingest Duo Telephony logs into Google Security Operations using Amazon S3. The parser extracts fields from the logs and maps them to the Unified Data Model (UDM). It handles various Duo log formats, converting timestamps and extracting user information, network details, and security results before structuring the output in the standardized UDM format.

Before you begin

Make sure you have the following prerequisites:

  • Google SecOps instance.
  • Privileged access to Duo Admin Panel with Owner role.
  • Privileged access to AWS (S3, Identity and Access Management (IAM), Lambda, EventBridge).

Collect Duo prerequisites (API credentials)

  1. Sign in to the Duo Admin Panel as an administrator with the Owner role.
  2. Go to Applications > Application Catalog.
  3. Locate the entry for Admin API in the catalog.
  4. Click + Add to create the application.
  5. Copy the following details and save them in a secure location:
    • Integration Key
    • Secret Key
    • API Hostname (for example, api-yyyyyyyy.duosecurity.com)
  6. In the Permissions section, deselect all the permission options except Grant read log.
  7. Click Save Changes.
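The collector later in this guide signs every Admin API request with these credentials using Duo's HMAC-SHA1 scheme. If you want to verify the credentials independently before deploying, the signing logic can be sketched as follows (the hostname and keys you pass in are placeholders for your own values):

```python
import base64
import email.utils
import hashlib
import hmac
import urllib.parse

def sign_duo_request(method, host, path, params, ikey, skey, date=None):
    """Return the Date and Authorization headers for a Duo Admin API call.

    Duo signs the RFC 2822 date, HTTP method, lowercased hostname, path,
    and the sorted, URL-encoded parameters with HMAC-SHA1.
    """
    date = date or email.utils.formatdate()
    args = '&'.join(
        f"{urllib.parse.quote(k, '~')}={urllib.parse.quote(params[k], '~')}"
        for k in sorted(params)
    )
    canon = '\n'.join([date, method.upper(), host.lower(), path, args])
    sig = hmac.new(skey.encode('utf-8'), canon.encode('utf-8'),
                   hashlib.sha1).hexdigest()
    auth = base64.b64encode(f"{ikey}:{sig}".encode('utf-8')).decode('utf-8')
    return {'Date': date, 'Authorization': f'Basic {auth}'}
```

Sending a GET request with these headers to `https://<API Hostname>/admin/v2/logs/telephony` should return `stat: OK` when the credentials and the Grant read log permission are configured correctly.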

Configure AWS S3 bucket and IAM for Google SecOps

  1. Create an Amazon S3 bucket following this user guide: Creating a bucket
  2. Save the bucket Name and Region for future reference (for example, duo-telephony-logs).
  3. Create a user following this user guide: Creating an IAM user.
  4. Select the created User.
  5. Select Security credentials tab.
  6. Click Create Access Key in the Access keys section.
  7. Select Third-party service as Use case.
  8. Click Next.
  9. Optional: Add a description tag.
  10. Click Create access key.
  11. Click Download .CSV file to save the Access Key and Secret Access Key for future reference.
  12. Click Done.
  13. Select the Permissions tab.
  14. Click Add permissions in the Permissions policies section.
  15. Select Add permissions.
  16. Select Attach policies directly.
  17. Search for AmazonS3FullAccess policy.
  18. Select the policy.
  19. Click Next.
  20. Click Add permissions.

Configure the IAM policy and role for S3 uploads

  1. In the AWS console, go to IAM > Policies.
  2. Click Create policy > JSON tab.
  3. Copy and paste the following policy.
  4. Policy JSON (replace duo-telephony-logs if you entered a different bucket name):

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "AllowPutObjects",
          "Effect": "Allow",
          "Action": "s3:PutObject",
          "Resource": "arn:aws:s3:::duo-telephony-logs/*"
        },
        {
          "Sid": "AllowGetStateObject",
          "Effect": "Allow",
          "Action": "s3:GetObject",
          "Resource": "arn:aws:s3:::duo-telephony-logs/duo-telephony/state.json"
        }
      ]
    }
    
  5. Click Next > Create policy.

  6. Go to IAM > Roles > Create role > AWS service > Lambda.

  7. Attach the newly created policy.

  8. Name the role duo-telephony-lambda-role and click Create role.
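If you prefer to script the console steps above, the same policy and role can be created with boto3. This is a sketch using the names from this guide; the policy name duo-telephony-s3-policy and the Lambda trust policy are assumptions, and the calls require IAM write permissions:

```python
import json

BUCKET = 'duo-telephony-logs'  # replace if you used a different bucket name

# Same permissions policy as the JSON above
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {"Sid": "AllowPutObjects", "Effect": "Allow",
         "Action": "s3:PutObject",
         "Resource": f"arn:aws:s3:::{BUCKET}/*"},
        {"Sid": "AllowGetStateObject", "Effect": "Allow",
         "Action": "s3:GetObject",
         "Resource": f"arn:aws:s3:::{BUCKET}/duo-telephony/state.json"},
    ],
}

# Trust policy that lets the Lambda service assume the role
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow",
                   "Principal": {"Service": "lambda.amazonaws.com"},
                   "Action": "sts:AssumeRole"}],
}

def create_role(iam):
    """Create the policy and role with a boto3 IAM client."""
    policy = iam.create_policy(
        PolicyName='duo-telephony-s3-policy',  # hypothetical policy name
        PolicyDocument=json.dumps(policy_document))
    iam.create_role(RoleName='duo-telephony-lambda-role',
                    AssumeRolePolicyDocument=json.dumps(trust_policy))
    iam.attach_role_policy(RoleName='duo-telephony-lambda-role',
                           PolicyArn=policy['Policy']['Arn'])
```

Call it as `create_role(boto3.client('iam'))` from an environment with suitable AWS credentials.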

Create the Lambda function

  1. In the AWS Console, go to Lambda > Functions > Create function.
  2. Click Author from scratch.
  3. Provide the following configuration details:

    • Name: duo-telephony-logs-collector
    • Runtime: Python 3.13
    • Architecture: x86_64
    • Execution role: duo-telephony-lambda-role
  4. After the function is created, open the Code tab, delete the stub and paste the following code (duo-telephony-logs-collector.py).

    import base64
    import email.utils
    import hashlib
    import hmac
    import json
    import os
    import time
    import urllib.error
    import urllib.parse
    import urllib.request
    from datetime import datetime, timedelta, timezone
    from typing import Any, Dict

    import boto3
    from botocore.exceptions import ClientError

    s3 = boto3.client('s3')


    def lambda_handler(event, context):
        """Fetch Duo telephony logs and store them in S3."""
        try:
            # Get configuration from environment variables
            bucket_name = os.environ['S3_BUCKET']
            s3_prefix = os.environ['S3_PREFIX'].rstrip('/')
            state_key = os.environ['STATE_KEY']
            integration_key = os.environ['DUO_IKEY']
            secret_key = os.environ['DUO_SKEY']
            api_hostname = os.environ['DUO_API_HOST']

            # Load state
            state = load_state(bucket_name, state_key)

            # Calculate the time range
            now = datetime.now(timezone.utc)
            mintime = maxtime = None
            if state.get('last_offset'):
                # Continue from the last pagination offset
                next_offset = state['last_offset']
            else:
                # Start from the last timestamp or 24 hours ago
                mintime = state.get(
                    'last_timestamp_ms',
                    int((now - timedelta(hours=24)).timestamp() * 1000))
                # Apply the 2-minute delay recommended by Duo
                maxtime = int((now - timedelta(minutes=2)).timestamp() * 1000)
                next_offset = None

            logs = []
            has_more = True

            # Fetch logs with pagination
            total_fetched = 0  # Number of pages fetched
            max_iterations = int(os.environ.get('MAX_ITERATIONS', '10'))
            while has_more and total_fetched < max_iterations:
                if next_offset:
                    # Use the offset for pagination
                    params = {
                        'limit': '1000',
                        'next_offset': next_offset
                    }
                else:
                    # Initial request with a time range
                    params = {
                        'mintime': str(mintime),
                        'maxtime': str(maxtime),
                        'limit': '1000',
                        'sort': 'ts:asc'
                    }

                # Make the API request with retry logic
                response = duo_api_call_with_retry(
                    'GET',
                    api_hostname,
                    '/admin/v2/logs/telephony',
                    params,
                    integration_key,
                    secret_key
                )

                if 'items' in response:
                    logs.extend(response['items'])
                    total_fetched += 1

                    # Check for more data
                    if 'metadata' in response and 'next_offset' in response['metadata']:
                        next_offset = response['metadata']['next_offset']
                        state['last_offset'] = next_offset
                    else:
                        has_more = False
                        state['last_offset'] = None
                        # Update the timestamp for the next run
                        if logs:
                            # Get the latest timestamp from the logs
                            latest_ts = max(log.get('ts', '') for log in logs)
                            if latest_ts:
                                # Convert the ISO timestamp to milliseconds
                                dt = datetime.fromisoformat(latest_ts.replace('Z', '+00:00'))
                                state['last_timestamp_ms'] = int(dt.timestamp() * 1000) + 1
                else:
                    has_more = False

            # Save the logs to S3 if any were fetched
            if logs:
                timestamp = datetime.now(timezone.utc).strftime('%Y%m%d_%H%M%S')
                key = f"{s3_prefix}/telephony_{timestamp}.json"

                # Format the logs as newline-delimited JSON
                log_data = '\n'.join(json.dumps(log) for log in logs)

                s3.put_object(
                    Bucket=bucket_name,
                    Key=key,
                    Body=log_data.encode('utf-8'),
                    ContentType='application/x-ndjson'
                )
                print(f"Saved {len(logs)} telephony logs to s3://{bucket_name}/{key}")
            else:
                print("No new telephony logs found")

            # Save state
            save_state(bucket_name, state_key, state)

            return {
                'statusCode': 200,
                'body': json.dumps({
                    'message': f'Successfully processed {len(logs)} telephony logs',
                    'logs_count': len(logs)
                })
            }
        except Exception as e:
            print(f"Error: {str(e)}")
            return {
                'statusCode': 500,
                'body': json.dumps({'error': str(e)})
            }


    def duo_api_call_with_retry(method: str, host: str, path: str, params: Dict[str, str],
                                ikey: str, skey: str, max_retries: int = 3) -> Dict[str, Any]:
        """Make an authenticated Duo Admin API call, retrying on rate-limit and server errors."""
        for attempt in range(max_retries):
            try:
                return duo_api_call(method, host, path, params, ikey, skey)
            except Exception as e:
                # Retry on rate limit (429) or server (5xx) errors
                if 'HTTP error 429' in str(e) or 'HTTP error 5' in str(e):
                    if attempt < max_retries - 1:
                        wait_time = (2 ** attempt) * 2  # Exponential backoff
                        print(f"Retrying after {wait_time} seconds...")
                        time.sleep(wait_time)
                        continue
                raise


    def duo_api_call(method: str, host: str, path: str, params: Dict[str, str],
                     ikey: str, skey: str) -> Dict[str, Any]:
        """Make an authenticated API call to the Duo Admin API."""
        # Create the canonical string for signing, using the RFC 2822 date format
        now = email.utils.formatdate()
        canon = [now, method.upper(), host.lower(), path]

        # Add the sorted, URL-encoded parameters
        args = []
        for key in sorted(params.keys()):
            val = params[key]
            args.append(f"{urllib.parse.quote(key, '~')}={urllib.parse.quote(val, '~')}")
        canon.append('&'.join(args))
        canon_str = '\n'.join(canon)

        # Sign the request
        sig = hmac.new(
            skey.encode('utf-8'),
            canon_str.encode('utf-8'),
            hashlib.sha1
        ).hexdigest()

        # Create the authorization header
        auth = base64.b64encode(f"{ikey}:{sig}".encode('utf-8')).decode('utf-8')

        # Build the URL
        url = f"https://{host}{path}"
        if params:
            url += '?' + '&'.join(args)

        # Make the request
        req = urllib.request.Request(url)
        req.add_header('Authorization', f'Basic {auth}')
        req.add_header('Date', now)
        req.add_header('Host', host)
        req.add_header('User-Agent', 'duo-telephony-s3-ingestor/1.0')

        try:
            with urllib.request.urlopen(req, timeout=30) as response:
                data = json.loads(response.read().decode('utf-8'))
                if data.get('stat') == 'OK':
                    return data.get('response', {})
                else:
                    raise Exception(f"API error: {data.get('message', 'Unknown error')}")
        except urllib.error.HTTPError as e:
            error_body = e.read().decode('utf-8')
            raise Exception(f"HTTP error {e.code}: {error_body}")


    def load_state(bucket: str, key: str) -> Dict[str, Any]:
        """Load state from S3."""
        try:
            response = s3.get_object(Bucket=bucket, Key=key)
            return json.loads(response['Body'].read().decode('utf-8'))
        except ClientError as e:
            if e.response.get('Error', {}).get('Code') in ('NoSuchKey', '404'):
                return {}
            print(f"Error loading state: {e}")
            return {}
        except Exception as e:
            print(f"Error loading state: {e}")
            return {}


    def save_state(bucket: str, key: str, state: Dict[str, Any]):
        """Save state to S3."""
        try:
            s3.put_object(
                Bucket=bucket,
                Key=key,
                Body=json.dumps(state).encode('utf-8'),
                ContentType='application/json'
            )
        except Exception as e:
            print(f"Error saving state: {e}")
    
  5. Go to Configuration > Environment variables.

  6. Click Edit > Add new environment variable.

  7. Enter the environment variables provided in the following table, replacing the example values with your values.

    Environment variables

    • S3_BUCKET: duo-telephony-logs
    • S3_PREFIX: duo-telephony/
    • STATE_KEY: duo-telephony/state.json
    • DUO_IKEY: <your-integration-key>
    • DUO_SKEY: <your-secret-key>
    • DUO_API_HOST: api-yyyyyyyy.duosecurity.com
    • MAX_ITERATIONS: 10
  8. After the function is created, stay on its page (or open Lambda > Functions > duo-telephony-logs-collector).

  9. Select the Configuration tab.

  10. In the General configuration panel, click Edit.

  11. Change Timeout to 5 minutes (300 seconds) and click Save.
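For reference, each successful run writes the fetched records to S3 as newline-delimited JSON and tracks its resume point in state.json. A small sketch of both formats (the log fields and values shown are illustrative, not a complete telephony record):

```python
import json

# Illustrative telephony log entries (field names follow the records
# returned by /admin/v2/logs/telephony; values are made up).
logs = [
    {"ts": "2025-01-15T10:00:00.000000+00:00", "context": "authentication",
     "credits": 1, "eventtype": "telephony", "phone": "+15555550100",
     "type": "sms"},
    {"ts": "2025-01-15T10:05:00.000000+00:00", "context": "enrollment",
     "credits": 2, "eventtype": "telephony", "phone": "+15555550101",
     "type": "phone"},
]

# The collector joins one JSON object per line (NDJSON), which the
# Amazon S3 V2 feed ingests directly.
ndjson = '\n'.join(json.dumps(log) for log in logs)

# state.json stores the resume point: the latest timestamp plus 1 ms,
# or a pagination offset when a run stops mid-page.
state = {"last_timestamp_ms": 1736935500001, "last_offset": None}
```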

Create an EventBridge schedule

  1. Go to Amazon EventBridge > Scheduler > Create schedule.
  2. Provide the following configuration details:
    • Recurring schedule: Rate (1 hour).
    • Target: your Lambda function duo-telephony-logs-collector.
    • Name: duo-telephony-logs-1h.
  3. Click Create schedule.
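The same schedule can be created with the boto3 EventBridge Scheduler client. The ARNs below are placeholders: substitute your region and account ID, and a scheduler execution role that is allowed to invoke the function (duo-telephony-scheduler-role is a hypothetical name):

```python
# Placeholder ARNs -- substitute your region, account ID, and role name.
schedule = {
    "Name": "duo-telephony-logs-1h",
    "ScheduleExpression": "rate(1 hour)",
    "FlexibleTimeWindow": {"Mode": "OFF"},
    "Target": {
        "Arn": ("arn:aws:lambda:us-east-1:123456789012"
                ":function:duo-telephony-logs-collector"),
        "RoleArn": "arn:aws:iam::123456789012:role/duo-telephony-scheduler-role",
    },
}

# With AWS credentials configured:
# boto3.client('scheduler').create_schedule(**schedule)
```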

(Optional) Create read-only IAM user & keys for Google SecOps

  1. Go to AWS Console > IAM > Users.
  2. Click Add users.
  3. Provide the following configuration details:
    • User: Enter secops-reader.
    • Access type: Select Access key – Programmatic access.
  4. Click Create user.
  5. Attach minimal read policy (custom): Users > secops-reader > Permissions > Add permissions > Attach policies directly > Create policy.
  6. JSON:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": ["s3:GetObject"],
          "Resource": "arn:aws:s3:::duo-telephony-logs/*"
        },
        {
          "Effect": "Allow",
          "Action": ["s3:ListBucket"],
          "Resource": "arn:aws:s3:::duo-telephony-logs"
        }
      ]
    }
    
  7. Name the policy secops-reader-policy.

  8. Click Create policy, then return to the user, search for and select secops-reader-policy, and click Next > Add permissions.

  9. Create access key for secops-reader: Security credentials > Access keys.

  10. Click Create access key.

  11. Download the .CSV file. (You'll paste these values into the feed configuration.)

Configure a feed in Google SecOps to ingest Duo Telephony logs

  1. Go to SIEM Settings > Feeds.
  2. Click + Add New Feed.
  3. In the Feed name field, enter a name for the feed (for example, Duo Telephony logs).
  4. Select Amazon S3 V2 as the Source type.
  5. Select Duo Telephony Logs as the Log type.
  6. Click Next.
  7. Specify values for the following input parameters:
    • S3 URI: s3://duo-telephony-logs/duo-telephony/
    • Source deletion options: Select deletion option according to your preference.
    • Maximum File Age: Include files modified in the last number of days. Default is 180 days.
    • Access Key ID: User access key with access to the S3 bucket.
    • Secret Access Key: User secret key with access to the S3 bucket.
    • Asset namespace: The asset namespace.
    • Ingestion labels: The label applied to the events from this feed.
  8. Click Next.
  9. Review your new feed configuration in the Finalize screen, and then click Submit.

UDM mapping table

Log field | UDM mapping | Logic
context | metadata.product_event_type | Directly mapped from the context field in the raw log.
credits | security_result.detection_fields.value | Directly mapped from the credits field in the raw log, nested under a detection_fields object with the key credits.
eventtype | security_result.detection_fields.value | Directly mapped from the eventtype field in the raw log, nested under a detection_fields object with the key eventtype.
host | principal.hostname | Directly mapped from the host field in the raw log if it's not an IP address.
phone | principal.user.phone_numbers | Directly mapped from the phone field in the raw log.
phone | principal.user.userid | Directly mapped from the phone field in the raw log.
timestamp | metadata.event_timestamp | Parsed from the timestamp field in the raw log, which represents seconds since epoch.
type | security_result.summary | Directly mapped from the type field in the raw log.
(none) | extensions.auth.mechanism | Set to a static value of "MECHANISM_UNSPECIFIED" in the parser.
(none) | metadata.event_type | Set to "USER_UNCATEGORIZED" if both the context and host fields are present in the raw log, to "STATUS_UPDATE" if only host is present, and to "GENERIC_EVENT" otherwise.
(none) | metadata.log_type | Directly taken from the raw log's log_type field.
(none) | metadata.product_name | Set to a static value of "Telephony" in the parser.
(none) | metadata.vendor_name | Set to a static value of "Duo" in the parser.
(none) | security_result.action | Set to a static value of "ALLOW" in the parser.
(none) | security_result.severity | Set to a static value of "INFORMATIONAL" in the parser.
(none) | target.application | Set to a static value of "Duo Telephony" in the parser.
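As noted in the timestamp row above, the parser converts the raw log's epoch-seconds timestamp into a UTC event timestamp. The equivalent conversion can be sketched in Python as:

```python
from datetime import datetime, timezone

def epoch_to_event_timestamp(seconds: int) -> str:
    """Convert a raw-log epoch-seconds timestamp to an RFC 3339 UTC string,
    mirroring how the parser populates metadata.event_timestamp."""
    return datetime.fromtimestamp(seconds, tz=timezone.utc).isoformat()
```

For example, epoch_to_event_timestamp(1700000000) returns "2023-11-14T22:13:20+00:00".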

Need more help? Get answers from Community members and Google SecOps professionals.

Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.

Last updated November 3, 2025 UTC.