Operations & Analyst Templates
Workflow templates for operations professionals and data analysts who need to automate ETL pipelines, generate reports, sync data across systems, and monitor infrastructure—all from the terminal.
ETL Pipeline: Stripe → Postgres → Data Warehouse
Extract payment data from Stripe, transform it, and load it into your data warehouse for analysis.
Use Case
Daily ETL job that pulls transaction data from Stripe, normalizes it, stores it in Postgres, and syncs to your data warehouse for BI tool access.
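Before creating the workflow, it can help to preview the payload the extract step will pull. Below is a minimal sketch using curl; the way YESTERDAY_TIMESTAMP is computed here is an assumption about how you might populate that variable, and jq is only used to trim the output.
# Start of yesterday as a Unix timestamp (GNU date syntax; adjust on macOS/BSD).
YESTERDAY_TIMESTAMP=$(date -d "yesterday 00:00" +%s)

# Preview the first page of charges the extract_stripe_charges step will request.
curl -s -G https://api.stripe.com/v1/charges \
  -H "Authorization: Bearer ${STRIPE_SECRET_KEY}" \
  -d "created[gte]=${YESTERDAY_TIMESTAMP}" \
  -d "limit=100" \
  | jq '.data[0] | {id, customer, amount, currency, status, created}'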
Workflow Configuration
name: stripe-etl-pipeline
description: Daily Stripe data extraction and transformation
trigger:
type: schedule
cron: "0 2 * * *" # Daily at 2am
steps:
- name: extract_stripe_charges
type: http
config:
method: GET
url: https://api.stripe.com/v1/charges?created[gte]=${YESTERDAY_TIMESTAMP}&limit=100
headers:
Authorization: Bearer ${STRIPE_SECRET_KEY}
output: charges
- name: extract_stripe_customers
type: http
config:
method: GET
url: https://api.stripe.com/v1/customers?created[gte]=${YESTERDAY_TIMESTAMP}&limit=100
headers:
Authorization: Bearer ${STRIPE_SECRET_KEY}
output: customers
- name: transform_and_load_charges
type: cli
config:
command: |
psql ${DATABASE_URL} <<EOF
INSERT INTO charges (
id, customer_id, amount, currency, status, created_at
) VALUES ${charges.data.map(c =>
`('${c.id}', '${c.customer}', ${c.amount}, '${c.currency}', '${c.status}', to_timestamp(${c.created}))`
).join(',')}
ON CONFLICT (id) DO UPDATE SET
status = EXCLUDED.status,
updated_at = NOW();
EOF
output: charges_loaded
- name: sync_to_warehouse
type: http
config:
method: POST
url: https://api.fivetran.com/v1/connectors/${CONNECTOR_ID}/sync
headers:
Authorization: Bearer ${FIVETRAN_API_KEY}
Content-Type: application/json
output: warehouse_sync
- name: notify_completion
type: http
config:
method: POST
url: ${SLACK_WEBHOOK_URL}
body: |
{
"text": "✅ Stripe ETL completed",
"blocks": [
{
"type": "section",
"text": {
"type": "mrkdwn",
"text": "*ETL Pipeline Complete*\n• Charges processed: ${charges.data.length}\n• Customers: ${customers.data.length}\n• Warehouse sync: ${warehouse_sync.status}"
}
}
]
}
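The upsert in transform_and_load_charges assumes a charges table with a unique constraint on id. A minimal sketch of a compatible schema is shown below; the column names match the INSERT, while the types and defaults are assumptions to adapt to your own conventions.
psql "${DATABASE_URL}" <<'SQL'
-- Target table for the upsert; ON CONFLICT (id) requires the primary key below.
CREATE TABLE IF NOT EXISTS charges (
  id          text PRIMARY KEY,           -- Stripe charge id (ch_...)
  customer_id text,                       -- Stripe customer id (cus_...)
  amount      bigint,                     -- smallest currency unit (cents)
  currency    text,
  status      text,
  created_at  timestamptz,
  updated_at  timestamptz DEFAULT NOW()   -- touched by the DO UPDATE branch
);
SQL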
Setup Instructions
1. Create the workflow:
   loopcli loop create stripe-etl-pipeline
2. Set your secrets:
   loopcli loop secrets set STRIPE_SECRET_KEY
   loopcli loop secrets set DATABASE_URL
   loopcli loop secrets set FIVETRAN_API_KEY
   loopcli loop secrets set SLACK_WEBHOOK_URL
3. Deploy with schedule:
   loopcli loop deploy stripe-etl-pipeline --activate
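Before relying on the 2am schedule, run the workflow once on demand and spot-check the load. A short sketch (the verification query assumes the charges schema above):
loopcli loop run stripe-etl-pipeline

# Confirm yesterday's charges actually landed in Postgres.
psql "${DATABASE_URL}" -t -c \
  "SELECT COUNT(*) FROM charges WHERE created_at >= NOW() - INTERVAL '1 day';"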
Automated Weekly Reporting
Generate and distribute weekly business reports from your data warehouse to stakeholders.
Use Case
Every Monday morning, query your data warehouse for key metrics, generate a PDF report, and email it to stakeholders.
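The workflow depends on wkhtmltopdf being installed on the runner and on the report queries returning data. A quick manual sanity check, using the same revenue query as the first step below:
# Is the HTML-to-PDF converter used by convert_to_pdf available?
wkhtmltopdf --version

# Dry-run the revenue metrics query the report is built from.
psql "${DATABASE_URL}" -t -c "
  SELECT SUM(amount)                 AS total_revenue,
         COUNT(DISTINCT customer_id) AS unique_customers,
         AVG(amount)                 AS avg_transaction
  FROM charges
  WHERE created_at >= NOW() - INTERVAL '7 days';"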
Workflow Configuration
name: weekly-business-report
description: Automated weekly report generation
trigger:
type: schedule
cron: "0 8 * * 1" # Mondays at 8am
steps:
- name: query_revenue_metrics
type: cli
config:
command: |
psql ${DATABASE_URL} -t -c "
SELECT
SUM(amount) as total_revenue,
COUNT(DISTINCT customer_id) as unique_customers,
AVG(amount) as avg_transaction
FROM charges
WHERE created_at >= NOW() - INTERVAL '7 days'
"
output: revenue
- name: query_user_growth
type: cli
config:
command: |
psql ${DATABASE_URL} -t -c "
SELECT
COUNT(*) as new_users,
COUNT(*) FILTER (WHERE subscription_status = 'active') as active_subscribers
FROM users
WHERE created_at >= NOW() - INTERVAL '7 days'
"
output: users
- name: query_top_products
type: cli
config:
command: |
psql ${DATABASE_URL} -t -c "
SELECT product_name, COUNT(*) as sales
FROM order_items
WHERE created_at >= NOW() - INTERVAL '7 days'
GROUP BY product_name
ORDER BY sales DESC
LIMIT 5
"
output: products
- name: generate_report_html
type: cli
config:
command: |
cat > /tmp/report.html <<EOF
<html>
<head><title>Weekly Business Report</title></head>
<body>
<h1>Weekly Business Report</h1>
<h2>Revenue Metrics</h2>
<ul>
<li>Total Revenue: \$${revenue.total_revenue}</li>
<li>Unique Customers: ${revenue.unique_customers}</li>
<li>Avg Transaction: \$${revenue.avg_transaction}</li>
</ul>
<h2>User Growth</h2>
<ul>
<li>New Users: ${users.new_users}</li>
<li>Active Subscribers: ${users.active_subscribers}</li>
</ul>
<h2>Top Products</h2>
<pre>${products.stdout}</pre>
</body>
</html>
EOF
- name: convert_to_pdf
type: cli
config:
command: wkhtmltopdf /tmp/report.html /tmp/report.pdf
- name: email_report
type: http
config:
method: POST
url: https://api.resend.com/emails
headers:
Authorization: Bearer ${RESEND_API_KEY}
Content-Type: application/json
body: |
{
"from": "reports@yourcompany.com",
"to": ["executives@yourcompany.com"],
"subject": "Weekly Business Report - Week of ${WEEK_START_DATE}",
"html": "<p>Please find attached the weekly business report.</p>",
"attachments": [
{
"filename": "weekly-report.pdf",
"content": "${base64encode(/tmp/report.pdf)}"
}
]
}
- name: archive_report
type: http
config:
method: POST
url: https://${SUPABASE_PROJECT_REF}.supabase.co/storage/v1/object/reports/weekly-${timestamp}.pdf
headers:
Authorization: Bearer ${SUPABASE_KEY}
body_file: /tmp/report.pdf
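Before pointing email_report at your stakeholder list, it is worth sending yourself a test message through the same Resend endpoint. A sketch with curl, assuming a report has already been rendered to /tmp/report.pdf and that the from address is on a domain you have verified with Resend; base64 -w0 is GNU coreutils (plain base64 on macOS).
cat > /tmp/test-email.json <<EOF
{
  "from": "reports@yourcompany.com",
  "to": ["you@yourcompany.com"],
  "subject": "Weekly report (test send)",
  "html": "<p>Test run of the weekly report workflow.</p>",
  "attachments": [
    { "filename": "weekly-report.pdf", "content": "$(base64 -w0 /tmp/report.pdf)" }
  ]
}
EOF

curl -s -X POST https://api.resend.com/emails \
  -H "Authorization: Bearer ${RESEND_API_KEY}" \
  -H "Content-Type: application/json" \
  -d @/tmp/test-email.json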
Database Backup Automation
Automatically back up your database, upload the dump to S3, and verify its integrity.
Workflow Configuration
name: database-backup
description: Automated database backup with verification
trigger:
type: schedule
cron: "0 3 * * *" # Daily at 3am
steps:
- name: create_backup
type: cli
config:
command: |
pg_dump ${DATABASE_URL} -F c -f /tmp/backup-${timestamp}.dump
output: backup
- name: upload_to_s3
type: cli
config:
command: |
aws s3 cp /tmp/backup-${timestamp}.dump s3://${BACKUP_BUCKET}/backups/backup-${timestamp}.dump
output: upload
- name: verify_backup
type: cli
config:
command: |
aws s3api head-object --bucket ${BACKUP_BUCKET} --key backups/backup-${timestamp}.dump
output: verification
- name: cleanup_old_backups
type: cli
config:
command: |
# Keep only the 7 most recent backups (relies on lexically sortable timestamped filenames).
aws s3 ls s3://${BACKUP_BUCKET}/backups/ | head -n -7 | awk '{print $4}' | xargs -I {} aws s3 rm s3://${BACKUP_BUCKET}/backups/{}
- name: notify_success
type: http
config:
method: POST
url: ${SLACK_WEBHOOK_URL}
body: |
{
"text": "✅ Database backup completed: backup-${timestamp}.dump"
}
- name: alert_on_failure
type: http
condition: backup.exit_code != 0 || upload.exit_code != 0
config:
method: POST
url: ${PAGERDUTY_WEBHOOK_URL}
body: |
{
"routing_key": "${PAGERDUTY_KEY}",
"event_action": "trigger",
"payload": {
"summary": "Database backup failed",
"severity": "error",
"source": "loopcli-backup"
}
}
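A backup is only as good as your last restore test. Periodically pull a dump back down and confirm pg_restore can read it; a minimal sketch, assuming the same bucket layout as the workflow above:
# Grab the most recent dump from the backup prefix.
LATEST=$(aws s3 ls s3://${BACKUP_BUCKET}/backups/ | tail -n 1 | awk '{print $4}')
aws s3 cp "s3://${BACKUP_BUCKET}/backups/${LATEST}" /tmp/restore-test.dump

# Custom-format dumps (pg_dump -F c) can be inspected offline; a readable table of
# contents is a good smoke test that the archive is not truncated or corrupt.
pg_restore --list /tmp/restore-test.dump | head

# For a full check, restore into a scratch database:
# createdb restore_test && pg_restore -d restore_test /tmp/restore-test.dump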
Infrastructure Monitoring & Alerting
Monitor server disk usage, memory usage, and application health, with automatic alerts when thresholds are exceeded.
Workflow Configuration
name: infrastructure-health-check
description: Monitor infrastructure and alert on issues
trigger:
type: schedule
cron: "*/15 * * * *" # Every 15 minutes
steps:
- name: check_disk_usage
type: cli
config:
command: |
ssh ${SERVER_HOST} "df -h / | tail -1 | awk '{print \$5}' | sed 's/%//'"
output: disk
- name: check_memory_usage
type: cli
config:
command: |
ssh ${SERVER_HOST} "free | grep Mem | awk '{print (\$3/\$2) * 100.0}'"
output: memory
- name: check_app_health
type: http
config:
method: GET
url: https://yourapp.com/health
timeout: 10
output: health
- name: alert_disk_usage
type: http
condition: disk.stdout > 85
config:
method: POST
url: ${PAGERDUTY_WEBHOOK_URL}
body: |
{
"routing_key": "${PAGERDUTY_KEY}",
"event_action": "trigger",
"payload": {
"summary": "High disk usage: ${disk.stdout}%",
"severity": "warning"
}
}
- name: alert_memory_usage
type: http
condition: memory.stdout > 90
config:
method: POST
url: ${PAGERDUTY_WEBHOOK_URL}
body: |
{
"routing_key": "${PAGERDUTY_KEY}",
"event_action": "trigger",
"payload": {
"summary": "High memory usage: ${memory.stdout}%",
"severity": "warning"
}
}
- name: alert_app_down
type: http
condition: health.status_code != 200
config:
method: POST
url: ${PAGERDUTY_WEBHOOK_URL}
body: |
{
"routing_key": "${PAGERDUTY_KEY}",
"event_action": "trigger",
"payload": {
"summary": "Application health check failed",
"severity": "critical"
}
}
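The thresholds above (85% disk, 90% memory) are starting points. Run the same probes by hand against a healthy host first so the alerts reflect your baseline rather than guesses; a sketch using the commands from the steps above:
# Root-volume usage as a bare percentage (mirrors check_disk_usage).
ssh ${SERVER_HOST} "df -h / | tail -1 | awk '{print \$5}' | sed 's/%//'"

# Memory usage as a percentage (mirrors check_memory_usage).
ssh ${SERVER_HOST} "free | grep Mem | awk '{print (\$3/\$2) * 100.0}'"

# Status code and latency of the health endpoint (mirrors check_app_health).
curl -s -o /dev/null -w '%{http_code} %{time_total}s\n' --max-time 10 https://yourapp.com/health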
Data Quality Monitoring
Monitor data quality in your warehouse and alert on anomalies.
Workflow Configuration
name: data-quality-check
description: Monitor data quality and alert on issues
trigger:
type: schedule
cron: "0 */4 * * *" # Every 4 hours
steps:
- name: check_null_values
type: cli
config:
command: |
psql ${DATABASE_URL} -t -c "
SELECT COUNT(*) FROM users WHERE email IS NULL
"
output: null_check
- name: check_duplicate_records
type: cli
config:
command: |
psql ${DATABASE_URL} -t -c "
SELECT COUNT(*) FROM (
SELECT email, COUNT(*) as cnt
FROM users
GROUP BY email
HAVING COUNT(*) > 1
) duplicates
"
output: duplicate_check
- name: check_data_freshness
type: cli
config:
command: |
psql ${DATABASE_URL} -t -c "
SELECT EXTRACT(EPOCH FROM (NOW() - MAX(created_at)))/3600 as hours_since_last_record
FROM orders
"
output: freshness_check
- name: alert_on_nulls
type: http
condition: null_check.stdout > 10
config:
method: POST
url: ${SLACK_WEBHOOK_URL}
body: |
{
"text": "⚠️ Data quality issue: ${null_check.stdout} users with null emails"
}
- name: alert_on_duplicates
type: http
condition: duplicate_check.stdout > 0
config:
method: POST
url: ${SLACK_WEBHOOK_URL}
body: |
{
"text": "⚠️ Data quality issue: ${duplicate_check.stdout} duplicate user records"
}
- name: alert_on_stale_data
type: http
condition: freshness_check.stdout > 24
config:
method: POST
url: ${SLACK_WEBHOOK_URL}
body: |
{
"text": "⚠️ Data freshness issue: No new orders in ${freshness_check.stdout} hours"
}
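Before activating the schedule, run the three checks once by hand to establish a baseline, so the thresholds (more than 10 null emails, any duplicates, 24 hours without orders) match your data rather than guesses. A sketch bundling the same queries:
psql "${DATABASE_URL}" -t <<'SQL'
-- Null emails (workflow alerts when > 10)
SELECT COUNT(*) AS null_emails FROM users WHERE email IS NULL;

-- Duplicate emails (workflow alerts when > 0)
SELECT COUNT(*) AS duplicate_emails FROM (
  SELECT email FROM users GROUP BY email HAVING COUNT(*) > 1
) d;

-- Hours since the most recent order (workflow alerts when > 24)
SELECT EXTRACT(EPOCH FROM (NOW() - MAX(created_at)))/3600 AS hours_since_last_order
FROM orders;
SQL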
Getting Started
1. Install LoopCLI:
   npm install -g loopcli
2. Initialize project:
   loopcli project init
3. Create your workflow:
   loopcli loop create my-ops-workflow
4. Test locally:
   loopcli loop run my-ops-workflow
5. Deploy for automated runs:
   loopcli loop deploy my-ops-workflow --schedule "0 2 * * *" --activate