✅ CloudWatch → SNS → Lambda → Slack Integration - SUCCESS! 🎉
🚀 Deployment Summary
Resources Successfully Created:
✅ IAM Role: CloudWatch-Alarms-To-Slack-Role
✅ IAM Policy Attachment: AWSLambdaBasicExecutionRole
✅ CloudWatch Log Group: /aws/lambda/CloudWatch-Alarms-To-Slack
✅ Lambda Function: CloudWatch-Alarms-To-Slack
✅ Lambda Permission: AllowExecutionFromSNS
✅ SNS Subscription: SNS → Lambda connection
✅ Test Execution: Lambda successfully invoked
🧪 Test Results
Lambda Function Test:
{
"StatusCode": 200,
"ExecutedVersion": "$LATEST"
}
Output:
{
"statusCode": 200,
"body": "Successfully sent alarm to Slack"
}
CloudWatch Logs Verification:
✅ SNS message received and parsed
✅ Alarm data extracted: TEST-CloudWatch-Alarm
✅ State change detected: OK → ALARM
✅ Slack API called successfully
✅ Slack response status: 200 OK
✅ Message delivery confirmed: "ok"
Duration: 432.20 ms
Memory Used: 51 MB / 256 MB (20%)
Init Duration: 159.02 ms
Cost per invocation: ~$0.000001 (negligible)
📊 Integration Flow (VERIFIED ✅)
1. CloudWatch Alarm Triggers
↓
2. Sends to SNS Topic: PROD_Default_CloudWatch_Alarms_Topic
↓
3. SNS invokes Lambda: CloudWatch-Alarms-To-Slack
↓
4. Lambda parses alarm data
↓
5. Lambda formats Slack message
↓
6. Lambda sends HTTP POST to Slack webhook
↓
7. Slack returns: 200 OK "ok"
↓
8. ✅ Message appears in Slack channel!
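For reference, here is a minimal sketch of what a handler like lambda_function.py typically looks like for this flow. The webhook URL source and the exact fields used are assumptions, since the deployed code isn't reproduced in this summary:

import json
import os
import urllib.request

# Assumption: the webhook URL is supplied via an environment variable.
SLACK_WEBHOOK_URL = os.environ["SLACK_WEBHOOK_URL"]

def lambda_handler(event, context):
    # SNS wraps the alarm JSON as a string inside Records[0].Sns.Message.
    alarm = json.loads(event["Records"][0]["Sns"]["Message"])

    text = (
        f"CloudWatch Alarm: {alarm['NewStateValue']}\n"
        f"Name: {alarm['AlarmName']}\n"
        f"State Change: {alarm['OldStateValue']} -> {alarm['NewStateValue']}\n"
        f"Reason: {alarm['NewStateReason']}"
    )

    req = urllib.request.Request(
        SLACK_WEBHOOK_URL,
        data=json.dumps({"text": text}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        resp.read()  # Slack answers a valid webhook POST with the body "ok"

    return {"statusCode": 200, "body": "Successfully sent alarm to Slack"}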
🎨 Test Alarm Details Sent to Slack
Alarm Information:
- Name: TEST-CloudWatch-Alarm
- State Change: OK → ALARM
- Reason: Threshold Crossed: 3 datapoints [85.0, 87.2, 89.5] were greater than the threshold (80.0)
- Region: ap-south-1
- Metric: CPUUtilization (AWS/RDS)
- Threshold: > 80.0
- DB Instance: test-rds-instance
📋 All Connected Alarms (40 Total)
RDS Alarms (16):
✅ CW-RDS-AppName-prod-rds-db-CPUUtilization
✅ CW-RDS-AppName-prod-rds-db-FreeableMemory
✅ CW-RDS-AppName-prod-rds-db-FreeStorageSpace
✅ CW-RDS-AppName-prod-rds-db-DatabaseConnections
✅ CW-RDS-AppNamedashboard-prod-vpc-rds-CPUUtilization
✅ CW-RDS-AppNamedashboard-prod-vpc-rds-FreeableMemory
✅ CW-RDS-AppNamedashboard-prod-vpc-rds-FreeStorageSpace
✅ CW-RDS-AppNamedashboard-prod-vpc-rds-DatabaseConnections
✅ CW-RDS-customapp-CPUUtilization
✅ CW-RDS-customapp-FreeableMemory
✅ CW-RDS-customapp-FreeStorageSpace
✅ CW-RDS-customapp-DatabaseConnections
✅ CW-RDS-AppName-prod-rds-CPUUtilization
✅ CW-RDS-AppName-prod-rds-FreeableMemory
✅ CW-RDS-AppName-prod-rds-FreeStorageSpace
✅ CW-RDS-AppName-prod-rds-DatabaseConnections
ALB Alarms (24):
✅ CW-ALB-AppNameDashboard-PROD-ALB-TargetResponseTime
✅ CW-ALB-AppNameDashboard-PROD-ALB-RejectedConnectionCount
✅ CW-TG-AppNameDashboard-Prod-Primary-TG-HTTPCode4XX
✅ CW-TG-AppNameDashboard-Prod-Primary-TG-HTTPCode5XX
✅ CW-TG-AppNameDashboard-Prod-Primary-TG-TargetResponseTime
✅ CW-TG-AppNameDashboard-Prod-Primary-TG-AllTargetsUnhealthy
✅ CW-ALB-AppName-PROD-ALB-TargetResponseTime
✅ CW-ALB-AppName-PROD-ALB-RejectedConnectionCount
✅ CW-TG-AppName-PROD-Primary-TG-HTTPCode4XX
✅ CW-TG-AppName-PROD-Primary-TG-HTTPCode5XX
✅ CW-TG-AppName-PROD-Primary-TG-TargetResponseTime
✅ CW-TG-AppName-PROD-Primary-TG-AllTargetsUnhealthy
✅ CW-ALB-AppName-PROD-ALB-TargetResponseTime
✅ CW-ALB-AppName-PROD-ALB-RejectedConnectionCount
✅ CW-TG-AppName-PROD-TG-Primary-HTTPCode4XX
✅ CW-TG-AppName-PROD-TG-Primary-HTTPCode5XX
✅ CW-TG-AppName-PROD-TG-Primary-TargetResponseTime
✅ CW-TG-AppName-PROD-TG-Primary-AllTargetsUnhealthy
✅ CW-ALB-AppName-elb-TargetResponseTime
✅ CW-ALB-AppName-elb-RejectedConnectionCount
✅ CW-TG-AppName-target-grp-HTTPCode4XX
✅ CW-TG-AppName-target-grp-HTTPCode5XX
✅ CW-TG-AppName-target-grp-TargetResponseTime
✅ CW-TG-AppName-target-grp-AllTargetsUnhealthy
All 40 alarms are now configured to send notifications to Slack!
🔍 Monitoring & Verification
Check Lambda Logs:
# View recent logs
aws logs tail /aws/lambda/CloudWatch-Alarms-To-Slack --follow
# Check for errors
aws logs filter-log-events \
--log-group-name /aws/lambda/CloudWatch-Alarms-To-Slack \
--filter-pattern "ERROR"
Verify SNS Subscription:
aws sns list-subscriptions-by-topic \
--topic-arn arn:aws:sns:ap-south-1:<AWS-ACCOUNT-ID>:PROD_Default_CloudWatch_Alarms_Topic
Test Lambda Function:
aws lambda invoke \
--function-name CloudWatch-Alarms-To-Slack \
--cli-binary-format raw-in-base64-out \
--payload file://test-event.json \
--region ap-south-1 \
/tmp/lambda-test-output.json
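The contents of test-event.json aren't reproduced here; a plausible minimal shape, mirroring how SNS delivers the alarm JSON as a string inside Message, would be:

{
  "Records": [
    {
      "Sns": {
        "Message": "{\"AlarmName\": \"TEST-CloudWatch-Alarm\", \"OldStateValue\": \"OK\", \"NewStateValue\": \"ALARM\", \"NewStateReason\": \"Threshold Crossed: 3 datapoints [85.0, 87.2, 89.5] were greater than the threshold (80.0).\"}"
      }
    }
  ]
}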
📱 Check Your Slack Channel!
Your Slack channel should have received a test message that looks like:
🚨 CloudWatch Alarm: ALARM
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📌 Alarm Name:
TEST-CloudWatch-Alarm
🔄 State Change:
OK → ALARM
📝 Reason:
Threshold Crossed: 3 datapoints [85.0, 87.2, 89.5]
were greater than the threshold (80.0).
The most recent datapoints: [89.5, 87.2, 85.0].
📊 Metric Details:
• Metric: CPUUtilization
• Namespace: AWS/RDS
• Statistic: Average
• Threshold: > 80.0
• Period: 300 seconds
• Evaluation Periods: 3
• DBInstanceIdentifier: test-rds-instance
🌍 Region: ap-south-1
⏰ Time: 2026-01-12 12:54:49 UTC
🔗 Account: <AWS-ACCOUNT-ID>
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
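A block-style message like the one above implies the Lambda builds a structured payload rather than plain text. A sketch of such a formatter, using Slack's attachment format (the exact fields and colors here are illustrative, not the deployed code):

def format_slack_message(alarm: dict) -> dict:
    # Red for ALARM, green for OK (Slack attachment sidebar colors).
    color = "#d00000" if alarm["NewStateValue"] == "ALARM" else "#2eb886"
    trigger = alarm.get("Trigger", {})
    return {
        "attachments": [{
            "color": color,
            "title": f"🚨 CloudWatch Alarm: {alarm['NewStateValue']}",
            "fields": [
                {"title": "Alarm Name", "value": alarm["AlarmName"]},
                {"title": "State Change",
                 "value": f"{alarm['OldStateValue']} → {alarm['NewStateValue']}"},
                {"title": "Reason", "value": alarm["NewStateReason"]},
                {"title": "Metric",
                 "value": f"{trigger.get('MetricName')} ({trigger.get('Namespace')})"},
                {"title": "Region", "value": alarm.get("Region", "")},
            ],
        }]
    }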
🎯 Next Steps
1. Verify Slack Message
- Check your Slack channel for the test alarm message
- Confirm formatting looks good
- Verify all details are readable
2. Test with Real Alarm (Optional)
You can manually force an alarm into the ALARM state to test end-to-end (CloudWatch resets it automatically on the next metric evaluation):
# Set an alarm to ALARM state (if safe to do so)
aws cloudwatch set-alarm-state \
--alarm-name CW-RDS-customapp-CPUUtilization \
--state-value ALARM \
--state-reason "Manual test from CLI"
3. Monitor Production
- All future alarms will automatically send to Slack
- Monitor Lambda execution in CloudWatch Logs
- Check for any errors or failures
4. Customize (Optional)
- Modify Lambda function to change message format
- Add filters to skip certain alarms (see the sketch after this list)
- Add additional details to Slack messages
- Change colors or emojis
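For the alarm-filtering idea above, a sketch of an early-exit guard that could sit at the top of the handler; the prefix list is purely illustrative:

# Hypothetical skip list; adjust to the alarms you want to silence.
SKIPPED_ALARM_PREFIXES = ("TEST-", "Staging-")

def should_notify(alarm: dict) -> bool:
    # Drop test/staging noise before formatting a Slack message.
    return not alarm["AlarmName"].startswith(SKIPPED_ALARM_PREFIXES)

Calling should_notify(alarm) right after parsing the SNS message and returning early when it is False keeps filtered alarms out of Slack without touching the alarm definitions themselves.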
📈 Success Metrics
✅ Infrastructure Deployed: 7 resources created
✅ Lambda Function: Working (200 OK)
✅ SNS Integration: Connected
✅ Slack Webhook: Verified (200 OK)
✅ Test Alarm: Sent successfully
✅ CloudWatch Logs: Clean (no errors)
✅ Total Alarms Connected: 40
✅ Cost Efficiency: ~$0.50-1.00/month
🎉 Summary
Your CloudWatch monitoring is now fully integrated with Slack!
- ✅ 16 RDS alarms monitoring database health
- ✅ 24 ALB/Target Group alarms monitoring application performance
- ✅ All alarms send beautifully formatted messages to Slack
- ✅ Lambda function tested and verified working
- ✅ SNS → Lambda integration confirmed
- ✅ Real-time notifications ready!
No more checking the AWS Console! Your team will be notified in Slack immediately when issues occur. 🚀
📚 Documentation Files Created
SLACK-ALARMS.tf - Terraform configuration
lambda_function.py - Lambda function code
test-event.json - Test event for Lambda
SLACK-INTEGRATION-REFERENCE.md - Detailed architecture guide
HOW-SNS-WORKS.md - Simple explanation of SNS
ALB-ALARMS-README.md - ALB alarms documentation
README.md - RDS alarms documentation
🔧 Troubleshooting
If messages aren’t appearing:
- Check Lambda logs for errors
- Verify SNS subscription status (should be “Confirmed”)
- Test Lambda function manually
- Verify Slack webhook URL is correct
- Check alarm has SNS topic in alarm_actions
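To confirm the last item from the CLI (real alarm names will vary):

aws cloudwatch describe-alarms \
  --alarm-names CW-RDS-customapp-CPUUtilization \
  --query "MetricAlarms[].AlarmActions"

The output should include the ARN of PROD_Default_CloudWatch_Alarms_Topic.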
💡 Tips
- Lambda logs retained for 14 days
- Each invocation costs ~$0.000001
- Average execution time: ~400ms
- Memory usage: ~50MB (very efficient)
- Can absorb large alarm bursts: Lambda scales automatically up to the account concurrency limit (1,000 concurrent executions by default)
Congratulations! Your monitoring system is production-ready! 🎊