Notification Channels

9n9s supports a wide range of notification channels to ensure alerts reach the right people through their preferred communication methods. This page covers all supported integrations, setup instructions, and best practices.

Notification channels are configured at the Organization level and can be used in alert rules across all projects. This allows you to centrally manage team communication preferences while routing specific alerts to appropriate channels.

  • Email & SMS: Direct notifications to individuals
  • Team Chat: Slack, Discord, Microsoft Teams
  • Incident Management: PagerDuty, Opsgenie, Splunk On-Call
  • Webhooks: Custom integrations and automation
  • Messaging Apps: Telegram, Pushover, ntfy
  • And Many More: 80+ services via Apprise backend

Email

Email notifications provide detailed alert information with links back to the 9n9s dashboard.

Setup:

  1. Go to Organization Settings > Notification Channels
  2. Click Add Channel > Email
  3. Enter the recipient email address
  4. Configure message templates (optional)

Configuration Options:

```yaml
email:
  address: "ops@example.com"
  name: "Operations Team"   # Display name
  template: "detailed"      # detailed, summary, or minimal
  include_logs: true        # Include recent log snippets
  max_frequency: "5m"       # Rate limiting
```

Message Content:

  • Monitor name and current status
  • Status change information (e.g., Down → Up)
  • Timestamp and duration of issue
  • Direct link to monitor dashboard
  • Recent log snippets (if available)
  • Affected tags and metadata

SMS

SMS notifications (delivered via Twilio) provide concise alerts for critical issues that require immediate attention.

Setup:

  1. Add SMS channel in notification settings
  2. Configure Twilio credentials:
    • Account SID
    • Auth Token
    • From phone number (Twilio verified)
  3. Add recipient phone numbers

Configuration:

```yaml
sms:
  provider: "twilio"
  account_sid: "${TWILIO_ACCOUNT_SID}"
  auth_token: "${TWILIO_AUTH_TOKEN}"
  from_number: "+1234567890"
  to_numbers:
    - "+1987654321"
    - "+1555123456"
  message_template: "brief"   # Keep messages short
```

Message Format:

```text
9n9s Alert: [Monitor Name] is DOWN
Started: 2:15 PM
Duration: 5m
Link: https://9n9s.com/m/abc123
```
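
Under the hood, delivery through Twilio amounts to a single REST call to the Messages API. Here is a minimal, illustrative sketch of that call in Node (not 9n9s internals; the phone numbers are the placeholders from the configuration above):

```javascript
// Minimal sketch of a Twilio SMS send (illustrative, not 9n9s internals).
// Requires Node 18+ for the global fetch; credentials come from the environment.
const accountSid = process.env.TWILIO_ACCOUNT_SID;
const authToken = process.env.TWILIO_AUTH_TOKEN;

async function sendSms(to, body) {
  const url = `https://api.twilio.com/2010-04-01/Accounts/${accountSid}/Messages.json`;
  const auth = Buffer.from(`${accountSid}:${authToken}`).toString("base64");
  const response = await fetch(url, {
    method: "POST",
    headers: { Authorization: `Basic ${auth}` },
    // Twilio's Messages API expects form-encoded To / From / Body fields
    body: new URLSearchParams({
      To: to,
      From: "+1234567890",
      Body: body.slice(0, 160), // roughly one SMS segment
    }),
  });
  if (!response.ok) throw new Error(`Twilio error: ${response.status}`);
  return response.json();
}

sendSms("+1987654321", "9n9s Alert: API Health Check is DOWN. https://9n9s.com/m/abc123")
  .catch(console.error);
```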

Slack

Slack integration supports rich message formatting, threading, and interactive elements.

Setup Methods:

Method 1: Slack App (Recommended)

  1. Install the 9n9s Slack app from the Slack App Directory
  2. Authorize the app for your workspace
  3. Configure channel permissions and preferences
  4. Test the integration

Method 2: Incoming Webhooks

  1. Create an Incoming Webhook in your Slack workspace
  2. Copy the webhook URL
  3. Add as webhook-type notification channel in 9n9s
  4. Configure message formatting
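
With the webhook method, 9n9s simply POSTs JSON to your webhook URL. As a rough sketch of what such a post can look like, the following Node snippet sends a color-coded alert using Slack's standard attachment format (the exact payload 9n9s sends may differ):

```javascript
// Post a color-coded alert to a Slack Incoming Webhook. The payload uses
// Slack's attachment format; the exact fields 9n9s sends may differ.
const webhookUrl = process.env.SLACK_WEBHOOK_URL;

async function postSlackAlert(monitorName, status, duration) {
  const response = await fetch(webhookUrl, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      attachments: [
        {
          color: status === "DOWN" ? "#d00000" : "#2eb886", // red for DOWN, green otherwise
          title: `${monitorName} is ${status}`,
          text: `Duration: ${duration}`,
          footer: "9n9s",
        },
      ],
    }),
  });
  if (!response.ok) throw new Error(`Slack webhook failed: ${response.status}`);
}

postSlackAlert("API Health Check", "DOWN", "5m").catch(console.error);
```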

Advanced Configuration:

```yaml
slack:
  type: "app"   # or "webhook"
  workspace: "your-workspace"
  channels:
    - name: "#alerts"
      severity: ["high", "critical"]
      thread_replies: true
      mention_on_critical: "@here"
    - name: "#ops-team"
      severity: ["medium", "high", "critical"]
      include_graphs: true
  features:
    rich_formatting: true
    interactive_buttons: true
    status_updates: true
    thread_management: true
```

Message Features:

  • Rich formatting with colors based on severity
  • Interactive buttons for acknowledgment and resolution
  • Threaded conversations for related alerts
  • Automatic status updates when issues resolve
  • Embedded charts and graphs (Slack app only)

Discord

Discord integration provides webhook-based notifications with embeds and role mentions.

Setup:

  1. Create a webhook in your Discord server
  2. Copy the webhook URL
  3. Add as notification channel in 9n9s
  4. Configure embed styling and mentions

Configuration:

```yaml
discord:
  webhook_url: "https://discord.com/api/webhooks/..."
  username: "9n9s Monitor"
  avatar_url: "https://9n9s.com/assets/logo.png"
  embeds:
    color_coding: true
    include_thumbnail: true
    add_fields: ["duration", "tags", "project"]
  mentions:
    critical_role: "@everyone"
    high_role: "@ops"
    user_mentions: ["@john", "@jane"]
```

Embed Structure:

  • Color-coded based on alert severity
  • Comprehensive field information
  • Direct action links
  • Timestamp and duration details
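
For reference, a Discord webhook receives a JSON payload with an embeds array; colors are integers and timestamps are ISO 8601 strings. A hedged sketch of such a post (not necessarily the exact embed 9n9s builds):

```javascript
// Send a color-coded embed to a Discord webhook. Illustrative payload only;
// Discord expects colors as integers and timestamps as ISO 8601 strings.
const webhookUrl = process.env.DISCORD_WEBHOOK_URL;

async function postDiscordAlert(alert) {
  const response = await fetch(webhookUrl, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      username: "9n9s Monitor",
      embeds: [
        {
          title: `${alert.monitor} is ${alert.status}`,
          color: alert.status === "DOWN" ? 0xd00000 : 0x2eb886,
          fields: [
            { name: "Duration", value: alert.duration, inline: true },
            { name: "Project", value: alert.project, inline: true },
          ],
          timestamp: new Date().toISOString(),
        },
      ],
    }),
  });
  if (!response.ok) throw new Error(`Discord webhook failed: ${response.status}`);
}

postDiscordAlert({ monitor: "API Health Check", status: "DOWN", duration: "5m", project: "Production" })
  .catch(console.error);
```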

Microsoft Teams

Teams integration uses webhook connectors with Adaptive Cards for rich presentation.

Setup:

  1. Add a Connector to your Teams channel
  2. Configure the Incoming Webhook
  3. Copy the webhook URL
  4. Add as notification channel in 9n9s

Configuration:

```yaml
teams:
  webhook_url: "https://outlook.office.com/webhook/..."
  card_format: "adaptive"   # adaptive or simple
  include_actions: true
  color_theme: "branded"
  sections:
    - "status_summary"
    - "monitor_details"
    - "recent_logs"
    - "action_buttons"
```

PagerDuty

PagerDuty integration creates incidents with proper escalation and on-call routing.

Setup:

  1. Create a service in PagerDuty
  2. Add 9n9s integration to the service
  3. Copy the Integration Key
  4. Configure in 9n9s notification channels

Configuration:

```yaml
pagerduty:
  integration_key: "${PAGERDUTY_INTEGRATION_KEY}"
  routing_key: "your-routing-key"   # For Events API v2
  severity_mapping:
    critical: "critical"
    high: "error"
    medium: "warning"
    low: "info"
  incident_settings:
    auto_resolve: true
    group_related: true
    escalation_policy: "default"
```

Event Details:

  • Proper severity classification
  • Detailed event descriptions
  • Automatic resolution when monitors recover
  • Custom fields with monitor metadata
  • Links back to 9n9s dashboard
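
These events correspond to PagerDuty's Events API v2. The sketch below shows an equivalent call directly against that API; the dedup_key pattern is an assumption for illustration, but a stable key per monitor is what makes automatic resolution possible:

```javascript
// Trigger or resolve an incident via PagerDuty's Events API v2. Illustrative
// of what the integration does; the dedup_key pattern here is an assumption.
async function sendPagerDutyEvent(routingKey, action, monitor) {
  const response = await fetch("https://events.pagerduty.com/v2/enqueue", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      routing_key: routingKey,
      event_action: action, // "trigger" or "resolve"
      dedup_key: `9n9s-${monitor.id}`, // the same key on recovery resolves the incident
      payload: {
        summary: `${monitor.name} is ${monitor.status}`,
        source: "9n9s",
        severity: "error", // per the severity_mapping above, "high" maps to "error"
        custom_details: monitor.tags,
      },
    }),
  });
  if (!response.ok) throw new Error(`PagerDuty enqueue failed: ${response.status}`);
}

sendPagerDutyEvent(process.env.PAGERDUTY_ROUTING_KEY, "trigger", {
  id: "mon_xyz789",
  name: "API Health Check",
  status: "DOWN",
  tags: { environment: "production" },
}).catch(console.error);
```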

Opsgenie

Opsgenie integration creates alerts with team routing and escalation.

Setup:

  1. Create an API integration in Opsgenie
  2. Copy the API key and endpoint
  3. Configure team and escalation settings
  4. Add as notification channel in 9n9s

Configuration:

```yaml
opsgenie:
  api_key: "${OPSGENIE_API_KEY}"
  region: "us"   # us or eu
  default_team: "ops-team"
  priority_mapping:
    critical: "P1"
    high: "P2"
    medium: "P3"
    low: "P4"
  alert_settings:
    auto_close: true
    note_on_close: true
    tags_from_monitor: true
```

VictorOps (Splunk On-Call)

VictorOps (now Splunk On-Call) integration creates incidents with timeline entries and routing.

Setup:

  1. Configure REST integration in VictorOps
  2. Copy the API key and routing key
  3. Set up escalation policies
  4. Add as notification channel in 9n9s

Configuration:

```yaml
victorops:
  api_key: "${VICTOROPS_API_KEY}"
  routing_key: "your-routing-key"
  entity_id_format: "9n9s-{monitor_id}"
  monitoring_tool: "9n9s"
  message_type_mapping:
    down: "CRITICAL"
    degraded: "WARNING"
    up: "RECOVERY"
```

Webhooks

Webhooks provide maximum flexibility for custom integrations and automation.

Setup:

  1. Create webhook endpoint in your system
  2. Configure authentication (if required)
  3. Add webhook URL as notification channel
  4. Customize payload format

Configuration:

```yaml
webhook:
  url: "https://your-system.com/9n9s-webhook"
  method: "POST"
  headers:
    Authorization: "Bearer ${WEBHOOK_TOKEN}"
    Content-Type: "application/json"
    X-Source: "9n9s"
  payload_format: "json"   # json or form
  retry_settings:
    max_retries: 3
    retry_delay: "5s"
    timeout: "30s"
```

Payload Structure:

```json
{
  "alert_id": "alert_abc123",
  "monitor": {
    "id": "mon_xyz789",
    "name": "API Health Check",
    "type": "uptime",
    "status": "DOWN",
    "previous_status": "UP",
    "url": "https://api.example.com/health",
    "tags": {
      "environment": "production",
      "service": "api",
      "criticality": "high"
    }
  },
  "incident": {
    "started_at": "2024-01-15T10:30:00Z",
    "duration_seconds": 300,
    "severity": "high",
    "description": "API endpoint returning 500 errors"
  },
  "project": {
    "id": "proj_123",
    "name": "Production Services"
  },
  "organization": {
    "id": "org_456",
    "name": "Example Corp"
  },
  "links": {
    "monitor": "https://9n9s.com/monitors/mon_xyz789",
    "logs": "https://9n9s.com/monitors/mon_xyz789/logs",
    "dashboard": "https://9n9s.com/projects/proj_123"
  },
  "metadata": {
    "check_details": {
      "response_time_ms": 5000,
      "status_code": 500,
      "error_message": "Internal Server Error"
    },
    "recent_logs": [
      {
        "timestamp": "2024-01-15T10:29:55Z",
        "type": "ERROR",
        "message": "Connection timeout to database"
      }
    ]
  }
}
```

Webhook Security:

  • HTTPS endpoints only
  • Request signing with HMAC-SHA256
  • IP allowlisting for additional security
  • Custom authentication headers
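
If request signing is enabled, your receiver should recompute the HMAC over the raw request body and compare it in constant time. A minimal Express sketch, assuming a hex-encoded signature in an X-9n9s-Signature header (the header name and encoding are assumptions; check your channel settings for the actual scheme):

```javascript
// Verify an HMAC-SHA256-signed webhook in an Express receiver. The header
// name "X-9n9s-Signature" and hex encoding are assumptions for illustration.
const crypto = require("crypto");
const express = require("express");

const app = express();
const SECRET = process.env.WEBHOOK_SIGNING_SECRET;

// Keep the raw body: the signature must be computed over the exact bytes sent.
app.use(express.json({ verify: (req, res, buf) => { req.rawBody = buf; } }));

app.post("/9n9s-webhook", (req, res) => {
  const expected = crypto.createHmac("sha256", SECRET).update(req.rawBody).digest("hex");
  const received = req.get("X-9n9s-Signature") || "";

  // timingSafeEqual avoids leaking the expected signature through timing
  const ok = received.length === expected.length &&
    crypto.timingSafeEqual(Buffer.from(received), Buffer.from(expected));
  if (!ok) return res.status(401).send("invalid signature");

  console.log(`Alert ${req.body.alert_id}: ${req.body.monitor.status}`);
  res.sendStatus(200);
});

app.listen(8080);
```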

Custom Notification Plugins

For advanced use cases, 9n9s supports custom notification plugins:

```javascript
// Custom notification plugin example
const { NotificationPlugin } = require("@9n9s/plugin-sdk");

class CustomNotificationPlugin extends NotificationPlugin {
  constructor(config) {
    super(config);
    this.apiKey = config.apiKey;
    this.baseUrl = config.baseUrl;
  }

  async sendNotification(alert) {
    const payload = this.formatPayload(alert);
    const response = await fetch(`${this.baseUrl}/alerts`, {
      method: "POST",
      headers: {
        Authorization: `Bearer ${this.apiKey}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify(payload),
    });

    if (!response.ok) {
      throw new Error(`Notification failed: ${response.statusText}`);
    }

    return { success: true, messageId: response.headers.get("X-Message-ID") };
  }

  formatPayload(alert) {
    return {
      title: `${alert.monitor.name} is ${alert.monitor.status}`,
      description: alert.incident.description,
      severity: alert.incident.severity,
      timestamp: alert.incident.started_at,
      metadata: alert.metadata,
    };
  }
}

module.exports = CustomNotificationPlugin;
```

Telegram

Send alerts to Telegram chats or channels.

Setup:

  1. Create a Telegram bot via @BotFather
  2. Get the bot token
  3. Add bot to your chat/channel
  4. Get chat ID
  5. Configure in 9n9s

Configuration:

```yaml
telegram:
  bot_token: "${TELEGRAM_BOT_TOKEN}"
  chat_id: "-1001234567890"
  parse_mode: "HTML"   # HTML or Markdown
  disable_notification: false
  message_template: "detailed"
```
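
Behind this configuration is a single call to the Telegram Bot API's sendMessage method. A minimal sketch of that call (illustrative, not 9n9s internals):

```javascript
// Send a message via the Telegram Bot API's sendMessage method. Illustrative
// of what the channel does with the token and chat ID. Node 18+ for fetch.
const botToken = process.env.TELEGRAM_BOT_TOKEN;

async function sendTelegramAlert(chatId, html) {
  const response = await fetch(`https://api.telegram.org/bot${botToken}/sendMessage`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      chat_id: chatId,
      text: html,
      parse_mode: "HTML", // matches parse_mode in the configuration above
    }),
  });
  if (!response.ok) throw new Error(`Telegram API error: ${response.status}`);
}

sendTelegramAlert("-1001234567890", "<b>API Health Check</b> is DOWN").catch(console.error);
```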

Pushover

Send push notifications to mobile devices via Pushover.

Setup:

  1. Create Pushover application
  2. Get application token
  3. Get user/group keys
  4. Configure in 9n9s

Configuration:

```yaml
pushover:
  app_token: "${PUSHOVER_APP_TOKEN}"
  user_key: "${PUSHOVER_USER_KEY}"
  priority: "normal"   # lowest, low, normal, high, emergency
  sound: "pushover"
  device: "all"        # or specific device names
```

ntfy

Send notifications via the ntfy.sh service.

Setup:

  1. Choose or create an ntfy topic
  2. Configure the topic in 9n9s
  3. Subscribe to the topic on your devices

Configuration:

```yaml
ntfy:
  server: "https://ntfy.sh"      # or self-hosted
  topic: "9n9s-alerts-team"
  username: "${NTFY_USERNAME}"   # if authentication is required
  password: "${NTFY_PASSWORD}"
  priority: "default"
  tags: ["warning", "monitor"]
```

Alert Rules

Configure when and how notifications are sent using alert rules:

```yaml
alert_rules:
  - name: "Critical Production Alerts"
    conditions:
      monitor_tags:
        environment: "production"
        criticality: "high"
      status_changes:
        - "UP → DOWN"
        - "UP → DEGRADED"
      duration_threshold: "1m"   # Only alert after 1 minute
    actions:
      - channel: "pagerduty-oncall"
        immediate: true
      - channel: "slack-ops"
        delay: "30s"
      - channel: "email-team"
        delay: "2m"
    deduplication:
      window: "30m"
      group_by: ["project", "service"]

  - name: "Recovery Notifications"
    conditions:
      status_changes:
        - "DOWN → UP"
        - "DEGRADED → UP"
    actions:
      - channel: "slack-ops"
        template: "recovery"
      - channel: "email-team"
        template: "recovery"
    settings:
      auto_resolve_incidents: true
```
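
To make the semantics concrete, here is a hedged sketch (not 9n9s source) of how a rule's conditions compose: every configured tag must match, the status transition must be listed, and the state must have persisted past duration_threshold:

```javascript
// Hedged sketch (not 9n9s source) of how a rule's conditions compose.
function parseDuration(value = "0s") {
  const [, n, unit] = value.match(/^(\d+)([smh])$/);
  return Number(n) * { s: 1, m: 60, h: 3600 }[unit];
}

function ruleMatches(rule, event) {
  // Every configured tag must be present on the monitor with the same value
  const tagsOk = Object.entries(rule.conditions.monitor_tags ?? {})
    .every(([key, value]) => event.monitor.tags[key] === value);

  // The transition must be one of the listed status changes
  const transition = `${event.previous_status} → ${event.status}`;
  const transitionOk = (rule.conditions.status_changes ?? []).includes(transition);

  // The bad state must have persisted past duration_threshold
  const durationOk =
    event.duration_seconds >= parseDuration(rule.conditions.duration_threshold);

  return tagsOk && transitionOk && durationOk;
}
```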

Best Practices

Separate by Severity:

  • Critical alerts → PagerDuty + immediate Slack
  • High alerts → Slack + email
  • Medium alerts → Email only
  • Low alerts → Dashboard only

Team-Specific Routing:

  • Database issues → DBA team channels
  • API issues → Backend team channels
  • Infrastructure → DevOps team channels

Keep SMS Concise:

  • Include only essential information
  • Use abbreviations where clear
  • Include short links for details

Rich Formatting for Chat:

  • Use colors and emojis for quick recognition
  • Include action buttons where supported
  • Thread related alerts together

Regular Testing:

```sh
# Test notification channels
9n9s-cli notifications test --channel slack-ops
9n9s-cli notifications test --channel pagerduty-oncall

# Verify webhook endpoints
9n9s-cli webhooks validate --url https://your-webhook.com/endpoint
```

Channel Health Monitoring:

  • Monitor notification delivery success rates
  • Set up alerts for failed notification deliveries
  • Regularly review and update contact information

Runbook Links: Include links to relevant runbooks and documentation in alert messages.

Context Enrichment: Add relevant metadata to help responders:

  • Recent deployments
  • Related system changes
  • Historical patterns
  • Escalation procedures

Automated Remediation: Use webhooks to trigger automated responses:

  • Restart services
  • Scale resources
  • Execute remediation scripts
  • Update status pages
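
A hedged sketch of such a receiver: it accepts the webhook payload shown earlier and restarts the affected service when a production monitor goes DOWN (the systemctl command and tag names are examples only):

```javascript
// Hedged sketch of an automated-remediation receiver. The systemctl command
// and the tag names are examples only; adapt to your environment.
const { execFile } = require("child_process");
const express = require("express");

const app = express();
app.use(express.json());

app.post("/9n9s-webhook", (req, res) => {
  const { monitor } = req.body;
  if (monitor.status === "DOWN" && monitor.tags.environment === "production") {
    // Example remediation: restart the affected service via systemd
    execFile("systemctl", ["restart", monitor.tags.service], (err) => {
      if (err) console.error("Remediation failed:", err);
    });
  }
  res.sendStatus(200); // acknowledge immediately; remediation runs asynchronously
});

app.listen(8080);
```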

Troubleshooting

Notification Not Received:

  1. Check channel configuration and credentials
  2. Verify alert rule conditions are met
  3. Check notification delivery logs
  4. Test channel configuration

Rate Limiting:

  1. Configure appropriate frequency limits
  2. Use alert grouping and deduplication
  3. Implement escalation delays

Format Issues:

  1. Validate webhook payload structure
  2. Check character limits for SMS/chat
  3. Test message templates

Delivery Monitoring

9n9s provides delivery monitoring for all notification channels:

  • Success/failure rates per channel
  • Average delivery time
  • Failed delivery reasons
  • Channel health dashboards

Use these metrics to optimize your notification strategy and ensure reliable alert delivery.

Alert Grouping

Automatically group related alerts to reduce noise:

```yaml
grouping_rules:
  - name: "Service Outage Grouping"
    group_by: ["service", "environment"]
    group_window: "5m"
    max_group_size: 10
    template: "grouped_alert"
```

Escalation Chains

Create sophisticated escalation logic:

```yaml
escalation_chain:
  - level: 1
    delay: "0m"
    channels: ["slack-ops"]
  - level: 2
    delay: "5m"
    channels: ["pagerduty-primary"]
    conditions:
      - not_acknowledged: true
  - level: 3
    delay: "15m"
    channels: ["pagerduty-manager", "sms-emergency"]
    conditions:
      - not_resolved: true
      - severity: "critical"
```

Maintenance Windows

Suppress alerts during planned maintenance:

```yaml
maintenance_windows:
  - name: "Weekly Deployment"
    schedule: "0 2 * * SUN"   # cron: 02:00 every Sunday
    duration: "2h"
    affected_tags:
      environment: "production"
    suppress_all: false
    emergency_override: true
```

Effective notification configuration is crucial for maintaining operational awareness while avoiding alert fatigue. By thoughtfully configuring channels, rules, and escalation procedures, teams can ensure they’re informed of issues quickly while maintaining focus on critical problems.