Accord Capabilities
Naftiko 0.5 capability definitions for Accord: 100 capabilities covering integration workflows and service orchestrations.
---
naftiko: "0.5"
info:
label: "API Documentation Sync Pipeline"
description: "Extracts OpenAPI specs from GitHub, validates and lints, publishes to developer portal, updates Confluence, and notifies API team."
tags:
- api-management
- github
- confluence
- slack
capability:
exposes:
- type: mcp
namespace: api-management
port: 8080
tools:
- name: api_documentation_sync_pipeline
description: "Orchestrate the API documentation sync pipeline workflow."
inputParameters:
- name: resource_id
in: body
type: string
description: "Primary resource identifier."
steps:
- name: get-github
type: call
call: "github.get-resource"
with:
resource_id: "{{resource_id}}"
- name: process-confluence
type: call
call: "confluence.process-resource"
with:
resource_id: "{{resource_id}}"
- name: create-slack
type: call
call: "slack.create-resource"
with:
resource_id: "{{resource_id}}"
- name: notify-servicenow
type: call
call: "servicenow.notify-resource"
with:
resource_id: "{{resource_id}}"
consumes:
- type: http
namespace: github
baseUri: "https://api.github.com"
authentication:
type: bearer
token: "$secrets.github_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: github-op
method: POST
- type: http
namespace: confluence
baseUri: "https://accord.atlassian.net/wiki/rest/api"
authentication:
type: basic
username: "$secrets.confluence_user"
password: "$secrets.confluence_api_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: confluence-op
method: POST
- type: http
namespace: slack
baseUri: "https://slack.com/api"
authentication:
type: bearer
token: "$secrets.slack_bot_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: slack-op
method: POST
- type: http
namespace: servicenow
baseUri: "https://accord.service-now.com/api/now"
authentication:
type: basic
username: "$secrets.servicenow_user"
password: "$secrets.servicenow_password"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: servicenow-op
method: POST
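The `steps` above chain namespaced `call` operations, binding each step's `with:` parameters from `{{...}}` placeholders. A minimal Python sketch of how a runner might resolve those placeholders; the `resolve_templates` helper and its binding rules are assumptions for illustration, not part of the Naftiko spec:

```python
import re

# Replace every "{{name}}" placeholder in a step's `with:` parameters
# with the matching value from the workflow context.
def resolve_templates(params: dict, context: dict) -> dict:
    def sub(value):
        if isinstance(value, str):
            return re.sub(r"\{\{(\w+)\}\}",
                          lambda m: str(context.get(m.group(1), m.group(0))),
                          value)
        return value
    return {k: sub(v) for k, v in params.items()}

steps = [
    {"call": "github.get-resource",
     "with": {"resource_id": "{{resource_id}}"}},
    {"call": "confluence.process-resource",
     "with": {"resource_id": "{{resource_id}}"}},
]
context = {"resource_id": "spec-42"}
resolved = [resolve_templates(step["with"], context) for step in steps]
```

Unknown placeholders are left intact rather than dropped, so a misspelled context key is visible in the downstream request instead of silently becoming empty.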
---
naftiko: "0.5"
info:
label: "API Gateway Traffic Management"
description: "Monitors API traffic patterns in Datadog, adjusts rate limits dynamically, updates Grafana dashboards, logs changes in ServiceNow, and notifies the API team."
tags:
- api-management
- datadog
- grafana
- servicenow
- slack
capability:
exposes:
- type: mcp
namespace: api-management
port: 8080
tools:
- name: api_gateway_traffic_management
description: "Orchestrate the API gateway traffic management workflow."
inputParameters:
- name: resource_id
in: body
type: string
description: "Primary resource identifier."
steps:
- name: get-datadog
type: call
call: "datadog.get-resource"
with:
resource_id: "{{resource_id}}"
- name: process-grafana
type: call
call: "grafana.process-resource"
with:
resource_id: "{{resource_id}}"
- name: create-servicenow
type: call
call: "servicenow.create-resource"
with:
resource_id: "{{resource_id}}"
- name: notify-slack
type: call
call: "slack.notify-resource"
with:
resource_id: "{{resource_id}}"
consumes:
- type: http
namespace: datadog
baseUri: "https://api.datadoghq.com/api/v1"
authentication:
type: apiKey
key: "$secrets.datadog_api_key"
header: "DD-API-KEY"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: datadog-op
method: POST
- type: http
namespace: grafana
baseUri: "https://accord-grafana.com/api"
authentication:
type: bearer
token: "$secrets.grafana_api_key"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: grafana-op
method: POST
- type: http
namespace: servicenow
baseUri: "https://accord.service-now.com/api/now"
authentication:
type: basic
username: "$secrets.servicenow_user"
password: "$secrets.servicenow_password"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: servicenow-op
method: POST
- type: http
namespace: slack
baseUri: "https://slack.com/api"
authentication:
type: bearer
token: "$secrets.slack_bot_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: slack-op
method: POST
---
naftiko: "0.5"
info:
label: "Automated Changelog Generator"
description: "Collects merged PRs from GitHub, categorizes by type, generates changelog in Confluence, updates Slack channel, and tags the release."
tags:
- devops
- github
- confluence
- slack
capability:
exposes:
- type: mcp
namespace: devops
port: 8080
tools:
- name: automated_changelog_generator
description: "Orchestrate the automated changelog generation workflow."
inputParameters:
- name: resource_id
in: body
type: string
description: "Primary resource identifier."
steps:
- name: get-github
type: call
call: "github.get-resource"
with:
resource_id: "{{resource_id}}"
- name: process-confluence
type: call
call: "confluence.process-resource"
with:
resource_id: "{{resource_id}}"
- name: create-slack
type: call
call: "slack.create-resource"
with:
resource_id: "{{resource_id}}"
- name: notify-servicenow
type: call
call: "servicenow.notify-resource"
with:
resource_id: "{{resource_id}}"
consumes:
- type: http
namespace: github
baseUri: "https://api.github.com"
authentication:
type: bearer
token: "$secrets.github_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: github-op
method: POST
- type: http
namespace: confluence
baseUri: "https://accord.atlassian.net/wiki/rest/api"
authentication:
type: basic
username: "$secrets.confluence_user"
password: "$secrets.confluence_api_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: confluence-op
method: POST
- type: http
namespace: slack
baseUri: "https://slack.com/api"
authentication:
type: bearer
token: "$secrets.slack_bot_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: slack-op
method: POST
- type: http
namespace: servicenow
baseUri: "https://accord.service-now.com/api/now"
authentication:
type: basic
username: "$secrets.servicenow_user"
password: "$secrets.servicenow_password"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: servicenow-op
method: POST
---
naftiko: "0.5"
info:
label: "Automated Docker Cleanup on GKE"
description: "Lists unused Docker images in Artifact Registry older than a retention threshold, deletes them, and logs the cleanup summary to Cloud Logging."
tags:
- automation
- docker
- gcp
- google-cloud-platform
- cost-optimization
capability:
exposes:
- type: mcp
namespace: image-cleanup
port: 8080
tools:
- name: cleanup-old-images
description: "Given a GCP project, Artifact Registry repo, and retention days, delete images older than the threshold and log the cleanup."
inputParameters:
- name: project_id
in: body
type: string
description: "The GCP project identifier."
- name: repository
in: body
type: string
description: "The Artifact Registry repository name."
- name: retention_days
in: body
type: integer
description: "The number of days to retain images."
steps:
- name: list-old-images
type: call
call: "artifact-registry.list-images"
with:
project_id: "{{project_id}}"
repository: "{{repository}}"
- name: delete-images
type: call
call: "artifact-registry.delete-image"
with:
project_id: "{{project_id}}"
repository: "{{repository}}"
- name: log-cleanup
type: call
call: "cloud-logging.write-entry"
with:
project_id: "{{project_id}}"
log_name: "image-cleanup"
message: "Cleaned up images older than {{retention_days}} days from {{repository}}."
consumes:
- type: http
namespace: artifact-registry
baseUri: "https://artifactregistry.googleapis.com/v1"
authentication:
type: bearer
token: "$secrets.gcp_access_token"
resources:
- name: docker-images
path: "/projects/{{project_id}}/locations/us/repositories/{{repository}}/dockerImages"
inputParameters:
- name: project_id
in: path
- name: repository
in: path
operations:
- name: list-images
method: GET
- name: delete-image
method: DELETE
- type: http
namespace: cloud-logging
baseUri: "https://logging.googleapis.com/v2"
authentication:
type: bearer
token: "$secrets.gcp_access_token"
resources:
- name: entries
path: "/entries:write"
operations:
- name: write-entry
method: POST
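The retention logic behind `cleanup-old-images` can be sketched in Python. `images_past_retention` is a hypothetical helper; the `uploadTime` field mirrors what the Artifact Registry `dockerImages.list` response returns (RFC 3339 timestamps):

```python
from datetime import datetime, timedelta, timezone

def images_past_retention(images, retention_days, now=None):
    """Return the names of images whose uploadTime is older than the
    retention window; these are the deletion candidates."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=retention_days)
    stale = []
    for img in images:
        # fromisoformat on older Pythons needs an explicit UTC offset.
        uploaded = datetime.fromisoformat(img["uploadTime"].replace("Z", "+00:00"))
        if uploaded < cutoff:
            stale.append(img["name"])
    return stale

now = datetime(2024, 6, 1, tzinfo=timezone.utc)
images = [
    {"name": "projects/p/locations/us/repositories/r/dockerImages/app@sha256:aaa",
     "uploadTime": "2024-01-15T00:00:00Z"},
    {"name": "projects/p/locations/us/repositories/r/dockerImages/app@sha256:bbb",
     "uploadTime": "2024-05-30T00:00:00Z"},
]
stale = images_past_retention(images, retention_days=30, now=now)
```

With a 30-day window measured from June 1, only the January image falls past the cutoff.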
---
naftiko: "0.5"
info:
label: "BigQuery Data Quality Pipeline"
description: "Runs data quality checks on BigQuery datasets, logs results in Grafana, creates alerts for violations, publishes scorecard to Confluence."
tags:
- data-quality
- bigquery
- grafana
- confluence
- slack
capability:
exposes:
- type: mcp
namespace: data-quality
port: 8080
tools:
- name: bigquery_data_quality_pipeline
description: "Orchestrate the BigQuery data quality pipeline workflow."
inputParameters:
- name: resource_id
in: body
type: string
description: "Primary resource identifier."
steps:
- name: get-bigquery
type: call
call: "bigquery.get-resource"
with:
resource_id: "{{resource_id}}"
- name: process-grafana
type: call
call: "grafana.process-resource"
with:
resource_id: "{{resource_id}}"
- name: create-confluence
type: call
call: "confluence.create-resource"
with:
resource_id: "{{resource_id}}"
- name: notify-slack
type: call
call: "slack.notify-resource"
with:
resource_id: "{{resource_id}}"
consumes:
- type: http
namespace: bigquery
baseUri: "https://bigquery.googleapis.com/bigquery/v2"
authentication:
type: bearer
token: "$secrets.gcp_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: bigquery-op
method: POST
- type: http
namespace: grafana
baseUri: "https://accord-grafana.com/api"
authentication:
type: bearer
token: "$secrets.grafana_api_key"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: grafana-op
method: POST
- type: http
namespace: confluence
baseUri: "https://accord.atlassian.net/wiki/rest/api"
authentication:
type: basic
username: "$secrets.confluence_user"
password: "$secrets.confluence_api_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: confluence-op
method: POST
- type: http
namespace: slack
baseUri: "https://slack.com/api"
authentication:
type: bearer
token: "$secrets.slack_bot_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: slack-op
method: POST
---
naftiko: "0.5"
info:
label: "BigQuery Dataset to Excel Report"
description: "Executes a BigQuery SQL query, extracts results, and uploads a formatted Excel report to Google Cloud Storage for business stakeholder consumption."
tags:
- data
- gcp
- google-cloud-platform
- bigquery
- excel
- reporting
capability:
exposes:
- type: mcp
namespace: data-reporting
port: 8080
tools:
- name: query-to-excel
description: "Given a GCP project, BigQuery SQL query, and GCS bucket, run the query and upload results as an Excel file."
inputParameters:
- name: project_id
in: body
type: string
description: "The GCP project identifier."
- name: sql_query
in: body
type: string
description: "The BigQuery SQL query to execute."
- name: bucket_name
in: body
type: string
description: "The GCS bucket for the Excel output."
- name: report_name
in: body
type: string
description: "The desired file name for the Excel report."
steps:
- name: run-query
type: call
call: "bigquery.run-query"
with:
project_id: "{{project_id}}"
sql_query: "{{sql_query}}"
- name: upload-excel
type: call
call: "gcs.upload-object"
with:
bucket_name: "{{bucket_name}}"
file_name: "{{report_name}}.xlsx"
consumes:
- type: http
namespace: bigquery
baseUri: "https://bigquery.googleapis.com/bigquery/v2"
authentication:
type: bearer
token: "$secrets.gcp_access_token"
resources:
- name: queries
path: "/projects/{{project_id}}/queries"
inputParameters:
- name: project_id
in: path
operations:
- name: run-query
method: POST
- type: http
namespace: gcs
baseUri: "https://storage.googleapis.com/upload/storage/v1"
authentication:
type: bearer
token: "$secrets.gcp_access_token"
resources:
- name: objects
path: "/b/{{bucket_name}}/o?uploadType=media&name={{file_name}}"
inputParameters:
- name: bucket_name
in: path
- name: file_name
in: query
operations:
- name: upload-object
method: POST
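The two consumed endpoints map to concrete HTTP requests: a `jobs.query` POST body for BigQuery and a media-upload URL for GCS. A sketch with illustrative helper names:

```python
from urllib.parse import quote

def bigquery_query_body(sql_query: str, use_legacy_sql: bool = False) -> dict:
    # Request body for POST /projects/{project_id}/queries (jobs.query).
    return {"query": sql_query, "useLegacySql": use_legacy_sql}

def gcs_media_upload_url(bucket_name: str, file_name: str) -> str:
    # Media upload endpoint matching the `gcs.upload-object` resource path;
    # the object name is URL-encoded since it lands in a query parameter.
    return (f"https://storage.googleapis.com/upload/storage/v1"
            f"/b/{bucket_name}/o?uploadType=media&name={quote(file_name, safe='')}")

url = gcs_media_upload_url("reports-bucket", "q2-revenue.xlsx")
```

Note that `useLegacySql` defaults to true in the BigQuery REST API, so standard-SQL queries must set it to false explicitly.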
---
naftiko: "0.5"
info:
label: "CI Pipeline Optimization Analyzer"
description: "Analyzes GitHub Actions run times, identifies bottlenecks in Snowflake, creates optimization tasks in Jira, and notifies the platform team."
tags:
- devops
- github
- snowflake
- jira
- slack
capability:
exposes:
- type: mcp
namespace: devops
port: 8080
tools:
- name: ci_pipeline_optimization_analyzer
description: "Orchestrate the CI pipeline optimization analysis workflow."
inputParameters:
- name: resource_id
in: body
type: string
description: "Primary resource identifier."
steps:
- name: get-github
type: call
call: "github.get-resource"
with:
resource_id: "{{resource_id}}"
- name: process-snowflake
type: call
call: "snowflake.process-resource"
with:
resource_id: "{{resource_id}}"
- name: create-jira
type: call
call: "jira.create-resource"
with:
resource_id: "{{resource_id}}"
- name: notify-slack
type: call
call: "slack.notify-resource"
with:
resource_id: "{{resource_id}}"
consumes:
- type: http
namespace: github
baseUri: "https://api.github.com"
authentication:
type: bearer
token: "$secrets.github_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: github-op
method: POST
- type: http
namespace: snowflake
baseUri: "https://accord.snowflakecomputing.com/api/v2"
authentication:
type: bearer
token: "$secrets.snowflake_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: snowflake-op
method: POST
- type: http
namespace: jira
baseUri: "https://accord.atlassian.net/rest/api/3"
authentication:
type: basic
username: "$secrets.jira_user"
password: "$secrets.jira_api_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: jira-op
method: POST
- type: http
namespace: slack
baseUri: "https://slack.com/api"
authentication:
type: bearer
token: "$secrets.slack_bot_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: slack-op
method: POST
---
naftiko: "0.5"
info:
label: "Cloud Budget Alert Pipeline"
description: "Monitors GCP billing alerts, forecasts overruns in BigQuery, creates Jira tickets for optimization, updates Grafana, and notifies finance."
tags:
- finops
- bigquery
- jira
- grafana
- slack
capability:
exposes:
- type: mcp
namespace: finops
port: 8080
tools:
- name: cloud_budget_alert_pipeline
description: "Orchestrate the cloud budget alert pipeline workflow."
inputParameters:
- name: resource_id
in: body
type: string
description: "Primary resource identifier."
steps:
- name: get-bigquery
type: call
call: "bigquery.get-resource"
with:
resource_id: "{{resource_id}}"
- name: process-jira
type: call
call: "jira.process-resource"
with:
resource_id: "{{resource_id}}"
- name: create-grafana
type: call
call: "grafana.create-resource"
with:
resource_id: "{{resource_id}}"
- name: notify-slack
type: call
call: "slack.notify-resource"
with:
resource_id: "{{resource_id}}"
consumes:
- type: http
namespace: bigquery
baseUri: "https://bigquery.googleapis.com/bigquery/v2"
authentication:
type: bearer
token: "$secrets.gcp_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: bigquery-op
method: POST
- type: http
namespace: jira
baseUri: "https://accord.atlassian.net/rest/api/3"
authentication:
type: basic
username: "$secrets.jira_user"
password: "$secrets.jira_api_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: jira-op
method: POST
- type: http
namespace: grafana
baseUri: "https://accord-grafana.com/api"
authentication:
type: bearer
token: "$secrets.grafana_api_key"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: grafana-op
method: POST
- type: http
namespace: slack
baseUri: "https://slack.com/api"
authentication:
type: bearer
token: "$secrets.slack_bot_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: slack-op
method: POST
---
naftiko: "0.5"
info:
label: "Cloud Compliance Posture Check"
description: "Scans GCP resources for compliance, checks Kubernetes RBAC, validates network policies, creates findings in Jira, and publishes report to Confluence."
tags:
- compliance
- gcp
- kubernetes
- jira
- confluence
capability:
exposes:
- type: mcp
namespace: compliance
port: 8080
tools:
- name: cloud_compliance_posture_check
description: "Orchestrate the cloud compliance posture check workflow."
inputParameters:
- name: resource_id
in: body
type: string
description: "Primary resource identifier."
steps:
- name: get-gcp
type: call
call: "gcp.get-resource"
with:
resource_id: "{{resource_id}}"
- name: process-kubernetes
type: call
call: "k8s.process-resource"
with:
resource_id: "{{resource_id}}"
- name: create-jira
type: call
call: "jira.create-resource"
with:
resource_id: "{{resource_id}}"
- name: notify-confluence
type: call
call: "confluence.notify-resource"
with:
resource_id: "{{resource_id}}"
consumes:
- type: http
namespace: gcp
baseUri: "https://compute.googleapis.com/compute/v1/projects/accord"
authentication:
type: bearer
token: "$secrets.gcp_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: gcp-op
method: POST
- type: http
namespace: k8s
baseUri: "https://accord-k8s.com/api/v1"
authentication:
type: bearer
token: "$secrets.k8s_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: kubernetes-op
method: POST
- type: http
namespace: jira
baseUri: "https://accord.atlassian.net/rest/api/3"
authentication:
type: basic
username: "$secrets.jira_user"
password: "$secrets.jira_api_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: jira-op
method: POST
- type: http
namespace: confluence
baseUri: "https://accord.atlassian.net/wiki/rest/api"
authentication:
type: basic
username: "$secrets.confluence_user"
password: "$secrets.confluence_api_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: confluence-op
method: POST
---
naftiko: "0.5"
info:
label: "Cloud Database Backup Verifier"
description: "Validates Cloud SQL backups, tests restore integrity, logs results in ServiceNow, updates Grafana monitoring, and alerts DBA team."
tags:
- infrastructure
- gcp
- servicenow
- grafana
- slack
capability:
exposes:
- type: mcp
namespace: infrastructure
port: 8080
tools:
- name: cloud_database_backup_verifier
description: "Orchestrate the cloud database backup verification workflow."
inputParameters:
- name: resource_id
in: body
type: string
description: "Primary resource identifier."
steps:
- name: get-gcp
type: call
call: "gcp.get-resource"
with:
resource_id: "{{resource_id}}"
- name: process-servicenow
type: call
call: "servicenow.process-resource"
with:
resource_id: "{{resource_id}}"
- name: create-grafana
type: call
call: "grafana.create-resource"
with:
resource_id: "{{resource_id}}"
- name: notify-slack
type: call
call: "slack.notify-resource"
with:
resource_id: "{{resource_id}}"
consumes:
- type: http
namespace: gcp
baseUri: "https://compute.googleapis.com/compute/v1/projects/accord"
authentication:
type: bearer
token: "$secrets.gcp_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: gcp-op
method: POST
- type: http
namespace: servicenow
baseUri: "https://accord.service-now.com/api/now"
authentication:
type: basic
username: "$secrets.servicenow_user"
password: "$secrets.servicenow_password"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: servicenow-op
method: POST
- type: http
namespace: grafana
baseUri: "https://accord-grafana.com/api"
authentication:
type: bearer
token: "$secrets.grafana_api_key"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: grafana-op
method: POST
- type: http
namespace: slack
baseUri: "https://slack.com/api"
authentication:
type: bearer
token: "$secrets.slack_bot_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: slack-op
method: POST
---
naftiko: "0.5"
info:
label: "Cloud Networking Health Check"
description: "Validates VPC configurations in GCP, checks firewall rules, monitors latency in Prometheus, logs findings in ServiceNow, and alerts network team."
tags:
- networking
- gcp
- prometheus
- servicenow
- slack
capability:
exposes:
- type: mcp
namespace: networking
port: 8080
tools:
- name: cloud_networking_health_check
description: "Orchestrate the cloud networking health check workflow."
inputParameters:
- name: resource_id
in: body
type: string
description: "Primary resource identifier."
steps:
- name: get-gcp
type: call
call: "gcp.get-resource"
with:
resource_id: "{{resource_id}}"
- name: process-prometheus
type: call
call: "prometheus.process-resource"
with:
resource_id: "{{resource_id}}"
- name: create-servicenow
type: call
call: "servicenow.create-resource"
with:
resource_id: "{{resource_id}}"
- name: notify-slack
type: call
call: "slack.notify-resource"
with:
resource_id: "{{resource_id}}"
consumes:
- type: http
namespace: gcp
baseUri: "https://compute.googleapis.com/compute/v1/projects/accord"
authentication:
type: bearer
token: "$secrets.gcp_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: gcp-op
method: POST
- type: http
namespace: prometheus
baseUri: "https://accord-prometheus.com/api/v1"
authentication:
type: bearer
token: "$secrets.prometheus_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: prometheus-op
method: POST
- type: http
namespace: servicenow
baseUri: "https://accord.service-now.com/api/now"
authentication:
type: basic
username: "$secrets.servicenow_user"
password: "$secrets.servicenow_password"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: servicenow-op
method: POST
- type: http
namespace: slack
baseUri: "https://slack.com/api"
authentication:
type: bearer
token: "$secrets.slack_bot_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: slack-op
method: POST
---
naftiko: "0.5"
info:
label: "Cloud Resource Tagging Pipeline"
description: "Scans GCP resources for missing tags, applies governance tags, updates CMDB in ServiceNow, reports to Grafana, and notifies cloud team."
tags:
- governance
- gcp
- servicenow
- grafana
- slack
capability:
exposes:
- type: mcp
namespace: governance
port: 8080
tools:
- name: cloud_resource_tagging_pipeline
description: "Orchestrate the cloud resource tagging pipeline workflow."
inputParameters:
- name: resource_id
in: body
type: string
description: "Primary resource identifier."
steps:
- name: get-gcp
type: call
call: "gcp.get-resource"
with:
resource_id: "{{resource_id}}"
- name: process-servicenow
type: call
call: "servicenow.process-resource"
with:
resource_id: "{{resource_id}}"
- name: create-grafana
type: call
call: "grafana.create-resource"
with:
resource_id: "{{resource_id}}"
- name: notify-slack
type: call
call: "slack.notify-resource"
with:
resource_id: "{{resource_id}}"
consumes:
- type: http
namespace: gcp
baseUri: "https://compute.googleapis.com/compute/v1/projects/accord"
authentication:
type: bearer
token: "$secrets.gcp_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: gcp-op
method: POST
- type: http
namespace: servicenow
baseUri: "https://accord.service-now.com/api/now"
authentication:
type: basic
username: "$secrets.servicenow_user"
password: "$secrets.servicenow_password"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: servicenow-op
method: POST
- type: http
namespace: grafana
baseUri: "https://accord-grafana.com/api"
authentication:
type: bearer
token: "$secrets.grafana_api_key"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: grafana-op
method: POST
- type: http
namespace: slack
baseUri: "https://slack.com/api"
authentication:
type: bearer
token: "$secrets.slack_bot_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: slack-op
method: POST
---
naftiko: "0.5"
info:
label: "Cloud Run Auto-Scaling Event Logger"
description: "Detects a Cloud Run scaling event via Cloud Logging, records the instance count change, and publishes the event to Pub/Sub for downstream analytics."
tags:
- automation
- gcp
- google-cloud-platform
- event-driven
- cloud-run
- observability
capability:
exposes:
- type: mcp
namespace: scaling-events
port: 8080
tools:
- name: log-scaling-event
description: "Given a GCP project and Cloud Run service, query scaling logs and publish the event to Pub/Sub."
inputParameters:
- name: project_id
in: body
type: string
description: "The GCP project identifier."
- name: service_name
in: body
type: string
description: "The Cloud Run service name."
steps:
- name: query-scaling-logs
type: call
call: "cloud-logging.list-entries"
with:
project_id: "{{project_id}}"
filter_expression: "resource.type=\"cloud_run_revision\" AND resource.labels.service_name=\"{{service_name}}\" AND textPayload:\"autoscaling\""
- name: publish-event
type: call
call: "pubsub.publish"
with:
project_id: "{{project_id}}"
topic_name: "scaling-events"
message_data: "Service: {{service_name}}, Entries: {{query-scaling-logs.entries_count}}"
consumes:
- type: http
namespace: cloud-logging
baseUri: "https://logging.googleapis.com/v2"
authentication:
type: bearer
token: "$secrets.gcp_access_token"
resources:
- name: entries
path: "/entries:list"
operations:
- name: list-entries
method: POST
- type: http
namespace: pubsub
baseUri: "https://pubsub.googleapis.com/v1"
authentication:
type: bearer
token: "$secrets.gcp_access_token"
resources:
- name: topics
path: "/projects/{{project_id}}/topics/{{topic_name}}:publish"
inputParameters:
- name: project_id
in: path
- name: topic_name
in: path
operations:
- name: publish
method: POST
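Two details worth making concrete: the log filter that the `query-scaling-logs` step passes to `entries:list`, and the fact that the Pub/Sub REST `publish` method requires base64-encoded message data. A Python sketch (the function names are illustrative):

```python
import base64

def scaling_log_filter(service_name: str) -> str:
    # The same Cloud Logging filter the query-scaling-logs step uses.
    return (f'resource.type="cloud_run_revision" '
            f'AND resource.labels.service_name="{service_name}" '
            f'AND textPayload:"autoscaling"')

def pubsub_publish_body(message: str) -> dict:
    # Pub/Sub's REST publish method expects message data as base64,
    # not raw text.
    data = base64.b64encode(message.encode("utf-8")).decode("ascii")
    return {"messages": [{"data": data}]}

body = pubsub_publish_body("Service: checkout, Entries: 3")
```

Sending raw text in the `data` field is a common mistake; the API rejects it as invalid base64.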
---
naftiko: "0.5"
info:
label: "Cloud Run Service with Custom Domain Setup"
description: "Deploys a new Cloud Run service, maps a custom domain via Cloud DNS, and creates a Cloud Monitoring uptime check to verify the endpoint is healthy."
tags:
- automation
- gcp
- google-cloud-platform
- cloud-run
- dns
- observability
capability:
exposes:
- type: mcp
namespace: service-provisioning
port: 8080
tools:
- name: provision-service-with-domain
description: "Given a GCP project, Cloud Run service details, and custom domain, deploy the service, configure DNS, and set up monitoring."
inputParameters:
- name: project_id
in: body
type: string
description: "The GCP project identifier."
- name: region
in: body
type: string
description: "The GCP region for Cloud Run."
- name: service_name
in: body
type: string
description: "The Cloud Run service name."
- name: image_uri
in: body
type: string
description: "The Docker image URI to deploy."
- name: managed_zone
in: body
type: string
description: "The Cloud DNS managed zone."
- name: domain_name
in: body
type: string
description: "The custom domain FQDN."
steps:
- name: deploy-service
type: call
call: "cloud-run.create-service"
with:
project_id: "{{project_id}}"
region: "{{region}}"
service_name: "{{service_name}}"
image_uri: "{{image_uri}}"
- name: map-domain
type: call
call: "cloud-dns.upsert-record"
with:
project_id: "{{project_id}}"
managed_zone: "{{managed_zone}}"
record_name: "{{domain_name}}"
target_ip: "{{deploy-service.url}}"
- name: setup-monitoring
type: call
call: "monitoring.create-uptime-check"
with:
project_id: "{{project_id}}"
host: "{{domain_name}}"
consumes:
- type: http
namespace: cloud-run
baseUri: "https://run.googleapis.com/v2"
authentication:
type: bearer
token: "$secrets.gcp_access_token"
resources:
- name: services
path: "/projects/{{project_id}}/locations/{{region}}/services"
inputParameters:
- name: project_id
in: path
- name: region
in: path
operations:
- name: create-service
method: POST
- type: http
namespace: cloud-dns
baseUri: "https://dns.googleapis.com/dns/v1"
authentication:
type: bearer
token: "$secrets.gcp_access_token"
resources:
- name: resource-record-sets
path: "/projects/{{project_id}}/managedZones/{{managed_zone}}/rrsets"
inputParameters:
- name: project_id
in: path
- name: managed_zone
in: path
operations:
- name: upsert-record
method: POST
- type: http
namespace: monitoring
baseUri: "https://monitoring.googleapis.com/v3"
authentication:
type: bearer
token: "$secrets.gcp_access_token"
resources:
- name: uptime-check-configs
path: "/projects/{{project_id}}/uptimeCheckConfigs"
inputParameters:
- name: project_id
in: path
operations:
- name: create-uptime-check
method: POST
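One caveat on `map-domain`: a Cloud Run service URL is a hostname, not an IP address, so the DNS record that maps a custom domain is naturally a CNAME rather than an A record (the step's `target_ip` field name is misleading in that respect). A sketch of the Cloud DNS ResourceRecordSet body, with `cname_record_body` as an illustrative helper:

```python
from urllib.parse import urlparse

def cname_record_body(domain_name: str, service_url: str, ttl: int = 300) -> dict:
    """Cloud DNS ResourceRecordSet pointing a custom domain at the
    Cloud Run hostname extracted from the service URL."""
    host = urlparse(service_url).hostname
    return {
        "name": f"{domain_name}.",   # Cloud DNS names are absolute (trailing dot)
        "type": "CNAME",
        "ttl": ttl,
        "rrdatas": [f"{host}."],
    }

record = cname_record_body("api.example.com", "https://svc-abc123-uc.a.run.app")
```

In practice Cloud Run custom domains also require a verified domain mapping; this sketch covers only the DNS half of that setup.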
---
naftiko: "0.5"
info:
label: "Confluence Page Retrieval"
description: "Retrieves Confluence page content for Accord knowledge base."
tags:
- collaboration
- confluence
- documentation
capability:
exposes:
- type: mcp
namespace: knowledge
port: 8080
tools:
- name: get-page
description: "Retrieve a Confluence page from the Accord knowledge base."
inputParameters:
- name: page_id
in: body
type: string
description: "The ID of the Confluence page to look up."
steps:
- name: fetch-page
type: call
call: "confluence.get-page"
with:
page_id: "{{page_id}}"
consumes:
- type: http
namespace: confluence
baseUri: "https://accord.atlassian.net/wiki/rest/api"
authentication:
type: basic
username: "$secrets.confluence_user"
password: "$secrets.confluence_api_token"
resources:
- name: pages
path: "/content/{{page_id}}"
inputParameters:
- name: page_id
in: path
operations:
- name: get-page
method: GET
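The real Confluence REST API serves page content at `/content/{id}`, authenticated with basic auth (username plus API token, base64-encoded). A sketch of the request this capability would issue; `confluence_page_request` is an illustrative helper:

```python
import base64

def confluence_page_request(base_uri: str, page_id: str,
                            user: str, token: str) -> dict:
    """Build the URL and Authorization header for fetching a Confluence
    page body via GET /content/{id}?expand=body.storage."""
    creds = base64.b64encode(f"{user}:{token}".encode()).decode("ascii")
    return {
        "url": f"{base_uri}/content/{page_id}?expand=body.storage",
        "headers": {"Authorization": f"Basic {creds}"},
    }

req = confluence_page_request("https://accord.atlassian.net/wiki/rest/api",
                              "12345", "bot@accord.example", "api-token")
```

Without the `expand=body.storage` query parameter the API returns page metadata only, not the page content.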
---
naftiko: "0.5"
info:
label: "Container Deployment Pipeline"
description: "Builds a Docker image via GCP Cloud Build, pushes it to Artifact Registry, and deploys the new revision to Cloud Run, returning the live service URL."
tags:
- automation
- docker
- gcp
- google-cloud-platform
- cloud-run
- deployment
capability:
exposes:
- type: mcp
namespace: deploy-pipeline
port: 8080
tools:
- name: deploy-container
description: "Given a GCP project, source repo, and Cloud Run service name, build the Docker image, push to Artifact Registry, and deploy a new revision."
inputParameters:
- name: project_id
in: body
type: string
description: "The GCP project identifier."
- name: repo_name
in: body
type: string
description: "The source repository name in Cloud Source Repos."
- name: service_name
in: body
type: string
description: "The Cloud Run service to deploy to."
- name: region
in: body
type: string
description: "The GCP region for the Cloud Run service."
steps:
- name: trigger-build
type: call
call: "cloud-build.create-build"
with:
project_id: "{{project_id}}"
repo_name: "{{repo_name}}"
- name: deploy-revision
type: call
call: "cloud-run.deploy-service"
with:
project_id: "{{project_id}}"
region: "{{region}}"
service_name: "{{service_name}}"
image_uri: "{{trigger-build.image_uri}}"
consumes:
- type: http
namespace: cloud-build
baseUri: "https://cloudbuild.googleapis.com/v1"
authentication:
type: bearer
token: "$secrets.gcp_access_token"
resources:
- name: builds
path: "/projects/{{project_id}}/builds"
inputParameters:
- name: project_id
in: path
operations:
- name: create-build
method: POST
- type: http
namespace: cloud-run
baseUri: "https://run.googleapis.com/v2"
authentication:
type: bearer
token: "$secrets.gcp_access_token"
resources:
- name: services
path: "/projects/{{project_id}}/locations/{{region}}/services/{{service_name}}"
inputParameters:
- name: project_id
in: path
- name: region
in: path
- name: service_name
in: path
operations:
- name: deploy-service
method: PATCH
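The `trigger-build` step POSTs a build to Cloud Build v1. A minimal sketch of that build request — the repo, image URI, and build steps are illustrative, but the `source`/`steps`/`images` shape follows the public Cloud Build API:

```python
def build_cloud_build_request(repo_name: str, image_uri: str,
                              branch: str = "main") -> dict:
    """Build a Cloud Build v1 request that builds and pushes a Docker image."""
    return {
        "source": {"repoSource": {"repoName": repo_name, "branchName": branch}},
        "steps": [
            {"name": "gcr.io/cloud-builders/docker",
             "args": ["build", "-t", image_uri, "."]},
        ],
        "images": [image_uri],  # pushed to the registry when the build succeeds
    }

req = build_cloud_build_request(
    "accord-app", "us-docker.pkg.dev/my-project/app/app:abc123")
```

The resulting image URI is what the `deploy-revision` step passes to the Cloud Run PATCH as `{{trigger-build.image_uri}}`.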
Scans container image, validates tests pass in GitHub Actions, promotes to production registry, updates Kubernetes manifests, and notifies team.
naftiko: "0.5"
info:
label: "Container Image Promotion Pipeline"
description: "Scans container image, validates tests pass in GitHub Actions, promotes to production registry, updates Kubernetes manifests, and notifies team."
tags:
- devops
- github
- docker
- kubernetes
- slack
capability:
exposes:
- type: mcp
namespace: devops
port: 8080
tools:
- name: container_image_promotion_pipeline
description: "Orchestrate container image promotion pipeline workflow."
inputParameters:
- name: resource_id
in: body
type: string
description: "Primary resource identifier."
steps:
- name: get-github
type: call
call: "github.get-resource"
with:
resource_id: "{{resource_id}}"
- name: process-docker
type: call
call: "docker.process-resource"
with:
resource_id: "{{resource_id}}"
- name: create-kubernetes
type: call
call: "k8s.create-resource"
with:
resource_id: "{{resource_id}}"
- name: notify-slack
type: call
call: "slack.notify-resource"
with:
resource_id: "{{resource_id}}"
consumes:
- type: http
namespace: github
baseUri: "https://api.github.com"
authentication:
type: bearer
token: "$secrets.github_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: get-resource
method: GET
- type: http
namespace: docker
baseUri: "https://hub.docker.com/v2"
authentication:
type: bearer
token: "$secrets.docker_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: process-resource
method: POST
- type: http
namespace: k8s
baseUri: "https://accord-k8s.com/api/v1"
authentication:
type: bearer
token: "$secrets.k8s_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: create-resource
method: POST
- type: http
namespace: slack
baseUri: "https://slack.com/api"
authentication:
type: bearer
token: "$secrets.slack_bot_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: notify-resource
method: POST
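The final `notify-slack` step posts to the Slack Web API. A sketch of the `chat.postMessage` call that notification would amount to (channel, text, and token are placeholders; bot tokens go in the bearer header):

```python
def build_slack_message(channel: str, text: str, bot_token: str) -> dict:
    """Build a Slack chat.postMessage request with bot-token bearer auth."""
    return {
        "url": "https://slack.com/api/chat.postMessage",
        "headers": {
            "Authorization": f"Bearer {bot_token}",
            "Content-Type": "application/json; charset=utf-8",
        },
        "body": {"channel": channel, "text": text},
    }

req = build_slack_message("#deploys", "Image promoted to production", "xoxb-test")
```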
Scans Docker images for vulnerabilities, checks against admission policies, creates Jira issues for critical findings, and notifies security team.
naftiko: "0.5"
info:
label: "Container Security Scanning Pipeline"
description: "Scans Docker images for vulnerabilities, checks against admission policies, creates Jira issues for critical findings, and notifies security team."
tags:
- security
- docker
- jira
- slack
capability:
exposes:
- type: mcp
namespace: security
port: 8080
tools:
- name: container_security_scanning_pipeline
description: "Orchestrate container security scanning pipeline workflow."
inputParameters:
- name: resource_id
in: body
type: string
description: "Primary resource identifier."
steps:
- name: get-docker
type: call
call: "docker.get-resource"
with:
resource_id: "{{resource_id}}"
- name: process-jira
type: call
call: "jira.process-resource"
with:
resource_id: "{{resource_id}}"
- name: create-slack
type: call
call: "slack.create-resource"
with:
resource_id: "{{resource_id}}"
- name: notify-servicenow
type: call
call: "servicenow.notify-resource"
with:
resource_id: "{{resource_id}}"
consumes:
- type: http
namespace: docker
baseUri: "https://hub.docker.com/v2"
authentication:
type: bearer
token: "$secrets.docker_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: get-resource
method: GET
- type: http
namespace: jira
baseUri: "https://accord.atlassian.net/rest/api/3"
authentication:
type: basic
username: "$secrets.jira_user"
password: "$secrets.jira_api_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: process-resource
method: POST
- type: http
namespace: slack
baseUri: "https://slack.com/api"
authentication:
type: bearer
token: "$secrets.slack_bot_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: create-resource
method: POST
- type: http
namespace: servicenow
baseUri: "https://accord.service-now.com/api/now"
authentication:
type: basic
username: "$secrets.servicenow_user"
password: "$secrets.servicenow_password"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: notify-resource
method: POST
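Critical findings become Jira issues via the `jira` consumer above. A sketch of the issue-creation payload this would POST to `/rest/api/3/issue` — Jira Cloud's v3 API expects the description in Atlassian Document Format; the project key and issue type here are assumptions:

```python
def build_jira_issue(project_key: str, summary: str, description: str) -> dict:
    """Build a Jira Cloud v3 issue payload (description in ADF)."""
    return {
        "fields": {
            "project": {"key": project_key},
            "issuetype": {"name": "Bug"},  # assumed issue type for findings
            "summary": summary,
            "description": {
                "type": "doc",
                "version": 1,
                "content": [{
                    "type": "paragraph",
                    "content": [{"type": "text", "text": description}],
                }],
            },
        }
    }

payload = build_jira_issue("SEC", "Critical CVE in base image", "CVE details here")
```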
Maps team dependencies from Jira, identifies blockers, escalates in ServiceNow, updates Confluence board, and notifies program management.
naftiko: "0.5"
info:
label: "Cross-Team Dependency Tracker"
description: "Maps team dependencies from Jira, identifies blockers, escalates in ServiceNow, updates Confluence board, and notifies program management."
tags:
- project-management
- jira
- servicenow
- confluence
- slack
capability:
exposes:
- type: mcp
namespace: project-management
port: 8080
tools:
- name: cross_team_dependency_tracker
description: "Orchestrate cross-team dependency tracker workflow."
inputParameters:
- name: resource_id
in: body
type: string
description: "Primary resource identifier."
steps:
- name: get-jira
type: call
call: "jira.get-resource"
with:
resource_id: "{{resource_id}}"
- name: process-servicenow
type: call
call: "servicenow.process-resource"
with:
resource_id: "{{resource_id}}"
- name: create-confluence
type: call
call: "confluence.create-resource"
with:
resource_id: "{{resource_id}}"
- name: notify-slack
type: call
call: "slack.notify-resource"
with:
resource_id: "{{resource_id}}"
consumes:
- type: http
namespace: jira
baseUri: "https://accord.atlassian.net/rest/api/3"
authentication:
type: basic
username: "$secrets.jira_user"
password: "$secrets.jira_api_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: get-resource
method: GET
- type: http
namespace: servicenow
baseUri: "https://accord.service-now.com/api/now"
authentication:
type: basic
username: "$secrets.servicenow_user"
password: "$secrets.servicenow_password"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: process-resource
method: POST
- type: http
namespace: confluence
baseUri: "https://accord.atlassian.net/wiki/rest/api"
authentication:
type: basic
username: "$secrets.confluence_user"
password: "$secrets.confluence_api_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: create-resource
method: POST
- type: http
namespace: slack
baseUri: "https://slack.com/api"
authentication:
type: bearer
token: "$secrets.slack_bot_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: notify-resource
method: POST
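Escalations go through the `servicenow` consumer with basic auth. A sketch of what the escalation POST could look like against the ServiceNow Table API (the `incident` table and urgency value are assumptions; the capability's own paths are placeholders):

```python
def build_servicenow_incident(short_description: str, urgency: str = "2") -> dict:
    """Build the pieces of a ServiceNow Table API incident-creation request."""
    return {
        "method": "POST",
        "path": "/api/now/table/incident",
        "body": {
            "short_description": short_description,
            "urgency": urgency,  # 1 = high, 2 = medium, 3 = low
        },
    }

req = build_servicenow_incident("Cross-team dependency blocked: PLAT-42")
```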
Retrieves Datadog monitor status for Accord infrastructure.
naftiko: "0.5"
info:
label: "Datadog Monitor Status"
description: "Retrieves Datadog monitor status for Accord infrastructure."
tags:
- monitoring
- datadog
- alerting
capability:
exposes:
- type: mcp
namespace: observability
port: 8080
tools:
- name: get-monitor
description: "Check monitor at Accord."
inputParameters:
- name: monitor_id
in: body
type: string
description: "The monitor_id to look up."
call: "datadog.get-monitor"
with:
monitor_id: "{{monitor_id}}"
consumes:
- type: http
namespace: datadog
baseUri: "https://api.datadoghq.com/api/v1"
authentication:
type: apiKey
key: "$secrets.datadog_api_key"
header: "DD-API-KEY"
resources:
- name: monitors
path: "/monitor/{{monitor_id}}"
inputParameters:
- name: monitor_id
in: path
operations:
- name: get-monitor
method: GET
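A sketch of the monitor lookup this capability performs. Note that Datadog's read endpoints generally require an application key (`DD-APPLICATION-KEY`) alongside the API key, which the capability above does not declare — treat the second header as an assumption you may need to add:

```python
def build_datadog_monitor_request(monitor_id: str, api_key: str,
                                  app_key: str) -> dict:
    """Build the GET request for a Datadog v1 monitor by ID."""
    return {
        "method": "GET",
        "url": f"https://api.datadoghq.com/api/v1/monitor/{monitor_id}",
        "headers": {
            "DD-API-KEY": api_key,
            "DD-APPLICATION-KEY": app_key,  # typically required for reads
        },
    }

req = build_datadog_monitor_request("1234567", "api-key", "app-key")
```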
Collects DORA metrics from GitHub, analyzes in Snowflake, publishes to Grafana, creates improvement tasks in Jira, and notifies engineering leads.
naftiko: "0.5"
info:
label: "Developer Experience Metrics Pipeline"
description: "Collects DORA metrics from GitHub, analyzes in Snowflake, publishes to Grafana, creates improvement tasks in Jira, and notifies engineering leads."
tags:
- devops
- github
- snowflake
- grafana
- jira
capability:
exposes:
- type: mcp
namespace: devops
port: 8080
tools:
- name: developer_experience_metrics_pipeline
description: "Orchestrate developer experience metrics pipeline workflow."
inputParameters:
- name: resource_id
in: body
type: string
description: "Primary resource identifier."
steps:
- name: get-github
type: call
call: "github.get-resource"
with:
resource_id: "{{resource_id}}"
- name: process-snowflake
type: call
call: "snowflake.process-resource"
with:
resource_id: "{{resource_id}}"
- name: create-grafana
type: call
call: "grafana.create-resource"
with:
resource_id: "{{resource_id}}"
- name: notify-jira
type: call
call: "jira.notify-resource"
with:
resource_id: "{{resource_id}}"
consumes:
- type: http
namespace: github
baseUri: "https://api.github.com"
authentication:
type: bearer
token: "$secrets.github_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: get-resource
method: GET
- type: http
namespace: snowflake
baseUri: "https://accord.snowflakecomputing.com/api/v2"
authentication:
type: bearer
token: "$secrets.snowflake_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: process-resource
method: POST
- type: http
namespace: grafana
baseUri: "https://accord-grafana.com/api"
authentication:
type: bearer
token: "$secrets.grafana_api_key"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: create-resource
method: POST
- type: http
namespace: jira
baseUri: "https://accord.atlassian.net/rest/api/3"
authentication:
type: basic
username: "$secrets.jira_user"
password: "$secrets.jira_api_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: notify-resource
method: POST
Copies a Docker image from a staging Artifact Registry repository to production, tags the image, and updates the Cloud Run service to use the promoted image.
naftiko: "0.5"
info:
label: "Docker Image Promotion Pipeline"
description: "Copies a Docker image from a staging Artifact Registry repository to production, tags the image, and updates the Cloud Run service to use the promoted image."
tags:
- automation
- docker
- gcp
- google-cloud-platform
- cloud-run
- promotion
capability:
exposes:
- type: mcp
namespace: image-promotion
port: 8080
tools:
- name: promote-image
description: "Given a GCP project, source and target Artifact Registry repos, image name, and Cloud Run service, promote the image and deploy."
inputParameters:
- name: project_id
in: body
type: string
description: "The GCP project identifier."
- name: source_repo
in: body
type: string
description: "The staging Artifact Registry repository name."
- name: target_repo
in: body
type: string
description: "The production Artifact Registry repository name."
- name: image_name
in: body
type: string
description: "The Docker image name to promote."
- name: tag
in: body
type: string
description: "The image tag to promote."
- name: service_name
in: body
type: string
description: "The Cloud Run service name to update."
- name: region
in: body
type: string
description: "The GCP region for Cloud Run."
steps:
- name: tag-image
type: call
call: "artifact-registry.tag-image"
with:
project_id: "{{project_id}}"
source_repo: "{{source_repo}}"
target_repo: "{{target_repo}}"
image_name: "{{image_name}}"
tag: "{{tag}}"
- name: deploy-promoted
type: call
call: "cloud-run.deploy-service"
with:
project_id: "{{project_id}}"
region: "{{region}}"
service_name: "{{service_name}}"
image_uri: "{{tag-image.target_image_uri}}"
consumes:
- type: http
namespace: artifact-registry
baseUri: "https://artifactregistry.googleapis.com/v1"
authentication:
type: bearer
token: "$secrets.gcp_access_token"
resources:
- name: docker-images
path: "/projects/{{project_id}}/locations/us/repositories/{{source_repo}}/dockerImages/{{image_name}}/tags/{{tag}}"
inputParameters:
- name: project_id
in: path
- name: source_repo
in: path
- name: image_name
in: path
- name: tag
in: path
operations:
- name: tag-image
method: POST
- type: http
namespace: cloud-run
baseUri: "https://run.googleapis.com/v2"
authentication:
type: bearer
token: "$secrets.gcp_access_token"
resources:
- name: services
path: "/projects/{{project_id}}/locations/{{region}}/services/{{service_name}}"
inputParameters:
- name: project_id
in: path
- name: region
in: path
- name: service_name
in: path
operations:
- name: deploy-service
method: PATCH
Scans a Docker image in Artifact Registry for vulnerabilities and publishes critical findings to a Pub/Sub topic for security team notification.
naftiko: "0.5"
info:
label: "Docker Image Vulnerability Scan with Alert"
description: "Scans a Docker image in Artifact Registry for vulnerabilities and publishes critical findings to a Pub/Sub topic for security team notification."
tags:
- security
- docker
- gcp
- google-cloud-platform
- event-driven
capability:
exposes:
- type: mcp
namespace: container-security
port: 8080
tools:
- name: scan-docker-image
description: "Given a GCP project and Docker image path, scan for vulnerabilities and alert on critical findings via Pub/Sub."
inputParameters:
- name: project_id
in: body
type: string
description: "The GCP project identifier."
- name: image_uri
in: body
type: string
description: "The full Artifact Registry image URI including tag."
steps:
- name: run-scan
type: call
call: "gcp-artifact.scan-image"
with:
project_id: "{{project_id}}"
image_uri: "{{image_uri}}"
- name: alert-findings
type: call
call: "pubsub.publish"
with:
project_id: "{{project_id}}"
topic_name: "security-vulnerability-alerts"
message_data: "Image: {{image_uri}}, Critical: {{run-scan.critical_count}}, High: {{run-scan.high_count}}"
consumes:
- type: http
namespace: gcp-artifact
baseUri: "https://containeranalysis.googleapis.com/v1"
authentication:
type: bearer
token: "$secrets.gcp_access_token"
resources:
- name: occurrences
path: "/projects/{{project_id}}/occurrences?filter=resourceUrl%3D%22{{image_uri}}%22"
inputParameters:
- name: project_id
in: path
- name: image_uri
in: query
operations:
- name: scan-image
method: GET
- type: http
namespace: pubsub
baseUri: "https://pubsub.googleapis.com/v1"
authentication:
type: bearer
token: "$secrets.gcp_access_token"
resources:
- name: topics
path: "/projects/{{project_id}}/topics/{{topic_name}}:publish"
inputParameters:
- name: project_id
in: path
- name: topic_name
in: path
operations:
- name: publish
method: POST
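The `alert-findings` step publishes plain text, but the Pub/Sub v1 `topics:publish` endpoint requires each message's `data` field to be base64-encoded. A sketch of the URL and body the publish call would need:

```python
import base64

def build_pubsub_publish(project_id: str, topic: str, message: str) -> dict:
    """Build a Pub/Sub v1 publish request; message data must be base64-encoded."""
    return {
        "url": (f"https://pubsub.googleapis.com/v1/projects/{project_id}"
                f"/topics/{topic}:publish"),
        "body": {"messages": [
            {"data": base64.b64encode(message.encode()).decode()},
        ]},
    }

req = build_pubsub_publish(
    "my-project", "security-vulnerability-alerts",
    "Image: img:1.0, Critical: 2, High: 5")
```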
Searches Elasticsearch indexes for Accord.
naftiko: "0.5"
info:
label: "Elasticsearch Log Query"
description: "Searches Elasticsearch indexes for Accord."
tags:
- data
- elasticsearch
- search
capability:
exposes:
- type: mcp
namespace: search
port: 8080
tools:
- name: search-logs
description: "Search ES logs at Accord."
inputParameters:
- name: query
in: body
type: string
description: "The query to look up."
call: "elasticsearch.search-logs"
with:
query: "{{query}}"
consumes:
- type: http
namespace: elasticsearch
baseUri: "https://accord-es.com:9200"
authentication:
type: bearer
token: "$secrets.elasticsearch_token"
resources:
- name: search
path: "/_search?q={{query}}"
inputParameters:
- name: query
in: query
operations:
- name: search-logs
method: GET
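The search above passes the query as the `q` parameter, i.e. a Lucene query-string search, so the query must be URL-encoded. A sketch, with a hypothetical index pattern:

```python
from urllib.parse import quote

def build_es_search_url(base_uri: str, query: str, index: str = "logs-*") -> str:
    """Build an Elasticsearch query-string search URL (URI search)."""
    return f"{base_uri}/{index}/_search?q={quote(query)}"

url = build_es_search_url(
    "https://accord-es.com:9200", 'level:ERROR AND service:"checkout"')
```

For anything beyond simple lookups, a POST to `_search` with a JSON query DSL body is usually preferable to the `q` parameter.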
Identifies stale dev environments in GCP, validates no active usage, deletes resources, logs in ServiceNow, and reclaims budget.
naftiko: "0.5"
info:
label: "Environment Cleanup Orchestrator"
description: "Identifies stale dev environments in GCP, validates no active usage, deletes resources, logs in ServiceNow, and reclaims budget."
tags:
- finops
- gcp
- servicenow
- slack
capability:
exposes:
- type: mcp
namespace: finops
port: 8080
tools:
- name: environment_cleanup_orchestrator
description: "Orchestrate environment cleanup orchestrator workflow."
inputParameters:
- name: resource_id
in: body
type: string
description: "Primary resource identifier."
steps:
- name: get-gcp
type: call
call: "gcp.get-resource"
with:
resource_id: "{{resource_id}}"
- name: process-servicenow
type: call
call: "servicenow.process-resource"
with:
resource_id: "{{resource_id}}"
- name: create-slack
type: call
call: "slack.create-resource"
with:
resource_id: "{{resource_id}}"
consumes:
- type: http
namespace: gcp
baseUri: "https://compute.googleapis.com/compute/v1/projects/accord"
authentication:
type: bearer
token: "$secrets.gcp_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: get-resource
method: GET
- type: http
namespace: servicenow
baseUri: "https://accord.service-now.com/api/now"
authentication:
type: basic
username: "$secrets.servicenow_user"
password: "$secrets.servicenow_password"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: process-resource
method: POST
- type: http
namespace: slack
baseUri: "https://slack.com/api"
authentication:
type: bearer
token: "$secrets.slack_bot_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: create-resource
method: POST
Takes an Excel report file path and uploads it to a designated Google Cloud Storage bucket, returning the public URL for distribution.
naftiko: "0.5"
info:
label: "Excel Report to GCS Upload"
description: "Takes an Excel report file path and uploads it to a designated Google Cloud Storage bucket, returning the public URL for distribution."
tags:
- automation
- excel
- gcp
- google-cloud
- storage
capability:
exposes:
- type: mcp
namespace: report-distribution
port: 8080
tools:
- name: upload-excel-report
description: "Given a GCS bucket name and Excel file name, upload the report and return the storage URL."
inputParameters:
- name: bucket_name
in: body
type: string
description: "The destination GCS bucket name."
- name: file_name
in: body
type: string
description: "The Excel file name to upload."
- name: content_base64
in: body
type: string
description: "Base64-encoded content of the Excel file."
call: "gcs.upload-object"
with:
bucket_name: "{{bucket_name}}"
file_name: "{{file_name}}"
content_base64: "{{content_base64}}"
consumes:
- type: http
namespace: gcs
baseUri: "https://storage.googleapis.com/upload/storage/v1"
authentication:
type: bearer
token: "$secrets.gcp_access_token"
resources:
- name: objects
path: "/b/{{bucket_name}}/o?uploadType=media&name={{file_name}}"
inputParameters:
- name: bucket_name
in: path
- name: file_name
in: query
operations:
- name: upload-object
method: POST
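One subtlety in the upload above: `uploadType=media` expects the raw file bytes in the request body, so the base64 input must be decoded before sending, and the object name in the URL must be URL-encoded. A sketch (the bucket, file name, and content type are placeholders):

```python
import base64
from urllib.parse import quote

def build_gcs_upload(bucket: str, file_name: str, content_base64: str) -> dict:
    """Build a GCS JSON API media upload: decode base64 to raw bytes."""
    return {
        "url": (f"https://storage.googleapis.com/upload/storage/v1"
                f"/b/{bucket}/o?uploadType=media&name={quote(file_name, safe='')}"),
        "headers": {"Content-Type":
                    "application/vnd.openxmlformats-officedocument"
                    ".spreadsheetml.sheet"},
        "data": base64.b64decode(content_base64),  # raw bytes, not base64
    }

req = build_gcs_upload(
    "reports", "q3 report.xlsx", base64.b64encode(b"sheet-bytes").decode())
```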
Enables feature flags incrementally, monitors error rates in Datadog, evaluates metrics in Grafana, creates rollback Jira ticket if needed, and notifies team.
naftiko: "0.5"
info:
label: "Feature Flag Rollout Pipeline"
description: "Enables feature flags incrementally, monitors error rates in Datadog, evaluates metrics in Grafana, creates rollback Jira ticket if needed, and notifies team."
tags:
- devops
- datadog
- grafana
- jira
- slack
capability:
exposes:
- type: mcp
namespace: devops
port: 8080
tools:
- name: feature_flag_rollout_pipeline
description: "Orchestrate feature flag rollout pipeline workflow."
inputParameters:
- name: resource_id
in: body
type: string
description: "Primary resource identifier."
steps:
- name: get-datadog
type: call
call: "datadog.get-resource"
with:
resource_id: "{{resource_id}}"
- name: process-grafana
type: call
call: "grafana.process-resource"
with:
resource_id: "{{resource_id}}"
- name: create-jira
type: call
call: "jira.create-resource"
with:
resource_id: "{{resource_id}}"
- name: notify-slack
type: call
call: "slack.notify-resource"
with:
resource_id: "{{resource_id}}"
consumes:
- type: http
namespace: datadog
baseUri: "https://api.datadoghq.com/api/v1"
authentication:
type: apiKey
key: "$secrets.datadog_api_key"
header: "DD-API-KEY"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: get-resource
method: GET
- type: http
namespace: grafana
baseUri: "https://accord-grafana.com/api"
authentication:
type: bearer
token: "$secrets.grafana_api_key"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: process-resource
method: POST
- type: http
namespace: jira
baseUri: "https://accord.atlassian.net/rest/api/3"
authentication:
type: basic
username: "$secrets.jira_user"
password: "$secrets.jira_api_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: create-resource
method: POST
- type: http
namespace: slack
baseUri: "https://slack.com/api"
authentication:
type: bearer
token: "$secrets.slack_bot_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: notify-resource
method: POST
Deploys a Go backend to GKE, deploys a React frontend to GCS, updates DNS records in Cloud DNS, and creates a Cloud Monitoring uptime check for the new endpoint.
naftiko: "0.5"
info:
label: "Full-Stack Deployment with DNS and Monitoring"
description: "Deploys a Go backend to GKE, deploys a React frontend to GCS, updates DNS records in Cloud DNS, and creates a Cloud Monitoring uptime check for the new endpoint."
tags:
- automation
- go
- react
- gcp
- google-cloud-platform
- kubernetes
- deployment
- dns
- observability
capability:
exposes:
- type: mcp
namespace: fullstack-deploy
port: 8080
tools:
- name: deploy-full-stack
description: "Given a GCP project, Go backend repo, React frontend repo, and domain details, deploy the full stack with DNS and monitoring."
inputParameters:
- name: project_id
in: body
type: string
description: "The GCP project identifier."
- name: backend_repo
in: body
type: string
description: "The Go backend source repository name."
- name: frontend_bucket
in: body
type: string
description: "The GCS bucket for React frontend assets."
- name: namespace
in: body
type: string
description: "The Kubernetes namespace for the backend."
- name: deployment_name
in: body
type: string
description: "The Kubernetes deployment name for the backend."
- name: managed_zone
in: body
type: string
description: "The Cloud DNS managed zone."
- name: domain_name
in: body
type: string
description: "The FQDN to update DNS for."
steps:
- name: build-backend
type: call
call: "cloud-build.create-build"
with:
project_id: "{{project_id}}"
repo_name: "{{backend_repo}}"
- name: deploy-backend
type: call
call: "k8s.patch-deployment"
with:
namespace: "{{namespace}}"
deployment_name: "{{deployment_name}}"
image_tag: "{{build-backend.image_uri}}"
- name: upload-frontend
type: call
call: "gcs.upload-object"
with:
bucket_name: "{{frontend_bucket}}"
file_name: "dist/index.html"
- name: update-dns
type: call
call: "cloud-dns.upsert-record"
with:
project_id: "{{project_id}}"
managed_zone: "{{managed_zone}}"
record_name: "{{domain_name}}"
- name: create-uptime-check
type: call
call: "monitoring.create-uptime-check"
with:
project_id: "{{project_id}}"
host: "{{domain_name}}"
consumes:
- type: http
namespace: cloud-build
baseUri: "https://cloudbuild.googleapis.com/v1"
authentication:
type: bearer
token: "$secrets.gcp_access_token"
resources:
- name: builds
path: "/projects/{{project_id}}/builds"
inputParameters:
- name: project_id
in: path
operations:
- name: create-build
method: POST
- type: http
namespace: k8s
baseUri: "https://kubernetes.default.svc"
authentication:
type: bearer
token: "$secrets.k8s_service_token"
resources:
- name: deployments
path: "/apis/apps/v1/namespaces/{{namespace}}/deployments/{{deployment_name}}"
inputParameters:
- name: namespace
in: path
- name: deployment_name
in: path
operations:
- name: patch-deployment
method: PATCH
- type: http
namespace: gcs
baseUri: "https://storage.googleapis.com/upload/storage/v1"
authentication:
type: bearer
token: "$secrets.gcp_access_token"
resources:
- name: objects
path: "/b/{{bucket_name}}/o?uploadType=media&name={{file_name}}"
inputParameters:
- name: bucket_name
in: path
- name: file_name
in: query
operations:
- name: upload-object
method: POST
- type: http
namespace: cloud-dns
baseUri: "https://dns.googleapis.com/dns/v1"
authentication:
type: bearer
token: "$secrets.gcp_access_token"
resources:
- name: resource-record-sets
path: "/projects/{{project_id}}/managedZones/{{managed_zone}}/rrsets"
inputParameters:
- name: project_id
in: path
- name: managed_zone
in: path
operations:
- name: upsert-record
method: POST
- type: http
namespace: monitoring
baseUri: "https://monitoring.googleapis.com/v3"
authentication:
type: bearer
token: "$secrets.gcp_access_token"
resources:
- name: uptime-check-configs
path: "/projects/{{project_id}}/uptimeCheckConfigs"
inputParameters:
- name: project_id
in: path
operations:
- name: create-uptime-check
method: POST
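The `update-dns` step targets the `rrsets` collection; in the Cloud DNS v1 API an upsert is usually expressed as an atomic change set that deletes the old record and adds the replacement. A sketch of that change body (TTL and IPs are illustrative; Cloud DNS record names are absolute, hence the trailing dot):

```python
def build_dns_change(record_name: str, target_ip: str, old_ip=None) -> dict:
    """Build a Cloud DNS v1 change set that upserts an A record."""
    if not record_name.endswith("."):
        record_name += "."  # Cloud DNS requires fully-qualified names
    def rrset(ip):
        return {"name": record_name, "type": "A", "ttl": 300, "rrdatas": [ip]}
    change = {"additions": [rrset(target_ip)]}
    if old_ip:
        change["deletions"] = [rrset(old_ip)]  # remove the record being replaced
    return change

change = build_dns_change("app.example.com", "203.0.113.10", old_ip="203.0.113.9")
```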
Checks the status of a BigQuery job, extracts bytes processed and slot usage, and writes a cost-tracking custom metric to Cloud Monitoring.
naftiko: "0.5"
info:
label: "GCP BigQuery Job Status with Cost Tracking"
description: "Checks the status of a BigQuery job, extracts bytes processed and slot usage, and writes a cost-tracking custom metric to Cloud Monitoring."
tags:
- data
- gcp
- google-cloud-platform
- bigquery
- cost-optimization
capability:
exposes:
- type: mcp
namespace: data-warehouse
port: 8080
tools:
- name: get-job-status
description: "Given a GCP project and BigQuery job ID, return the job state and write cost metrics to Cloud Monitoring."
inputParameters:
- name: project_id
in: body
type: string
description: "The GCP project identifier."
- name: job_id
in: body
type: string
description: "The BigQuery job identifier."
steps:
- name: fetch-job
type: call
call: "bigquery.get-job"
with:
project_id: "{{project_id}}"
job_id: "{{job_id}}"
- name: write-cost-metric
type: call
call: "monitoring.create-timeseries"
with:
project_id: "{{project_id}}"
metric_type: "custom.googleapis.com/bigquery/bytes_processed"
metric_value: "{{fetch-job.total_bytes_processed}}"
consumes:
- type: http
namespace: bigquery
baseUri: "https://bigquery.googleapis.com/bigquery/v2"
authentication:
type: bearer
token: "$secrets.gcp_access_token"
resources:
- name: jobs
path: "/projects/{{project_id}}/jobs/{{job_id}}"
inputParameters:
- name: project_id
in: path
- name: job_id
in: path
operations:
- name: get-job
method: GET
- type: http
namespace: monitoring
baseUri: "https://monitoring.googleapis.com/v3"
authentication:
type: bearer
token: "$secrets.gcp_access_token"
resources:
- name: timeseries
path: "/projects/{{project_id}}/timeSeries"
inputParameters:
- name: project_id
in: path
operations:
- name: create-timeseries
method: POST
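The `write-cost-metric` step turns the job's bytes processed into a custom metric. A sketch of the `timeSeries` write body Cloud Monitoring v3 expects — note `int64Value` is serialized as a string, and each write carries at most one point per series:

```python
def build_cost_timeseries(project_id: str, bytes_processed: int,
                          end_time: str) -> dict:
    """Build a Cloud Monitoring v3 timeSeries write body for a custom metric."""
    return {
        "timeSeries": [{
            "metric": {"type": "custom.googleapis.com/bigquery/bytes_processed"},
            "resource": {"type": "global", "labels": {"project_id": project_id}},
            "points": [{
                "interval": {"endTime": end_time},
                "value": {"int64Value": str(bytes_processed)},  # string-encoded
            }],
        }]
    }

body = build_cost_timeseries("my-project", 1234567890, "2024-01-01T00:00:00Z")
```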
Retrieves the current billing budget status for a GCP project, checks if spend exceeds the threshold, and publishes an alert event to Pub/Sub for finance team notification.
naftiko: "0.5"
info:
label: "GCP Billing Budget Alert Processor"
description: "Retrieves the current billing budget status for a GCP project, checks if spend exceeds the threshold, and publishes an alert event to Pub/Sub for finance team notification."
tags:
- cost-optimization
- gcp
- google-cloud-platform
- event-driven
- finance
capability:
exposes:
- type: mcp
namespace: billing-alerts
port: 8080
tools:
- name: check-budget-status
description: "Given a GCP billing account and budget ID, check spend against threshold and publish alert if exceeded."
inputParameters:
- name: billing_account
in: body
type: string
description: "The GCP billing account identifier."
- name: budget_id
in: body
type: string
description: "The budget identifier."
- name: project_id
in: body
type: string
description: "The GCP project for Pub/Sub."
steps:
- name: get-budget
type: call
call: "billing.get-budget"
with:
billing_account: "{{billing_account}}"
budget_id: "{{budget_id}}"
- name: publish-alert
type: call
call: "pubsub.publish"
with:
project_id: "{{project_id}}"
topic_name: "billing-alerts"
message_data: "Budget {{budget_id}}: Current spend {{get-budget.current_spend}} of {{get-budget.budget_amount}} ({{get-budget.spend_percentage}}%)"
consumes:
- type: http
namespace: billing
baseUri: "https://billingbudgets.googleapis.com/v1"
authentication:
type: bearer
token: "$secrets.gcp_access_token"
resources:
- name: budgets
path: "/billingAccounts/{{billing_account}}/budgets/{{budget_id}}"
inputParameters:
- name: billing_account
in: path
- name: budget_id
in: path
operations:
- name: get-budget
method: GET
- type: http
namespace: pubsub
baseUri: "https://pubsub.googleapis.com/v1"
authentication:
type: bearer
token: "$secrets.gcp_access_token"
resources:
- name: topics
path: "/projects/{{project_id}}/topics/{{topic_name}}:publish"
inputParameters:
- name: project_id
in: path
- name: topic_name
in: path
operations:
- name: publish
method: POST
Retrieves a Cloud Armor security policy and returns the rule list, default action, and adaptive protection status for WAF review.
naftiko: "0.5"
info:
label: "GCP Cloud Armor Security Policy Lookup"
description: "Retrieves a Cloud Armor security policy and returns the rule list, default action, and adaptive protection status for WAF review."
tags:
- security
- gcp
- google-cloud-platform
- networking
capability:
exposes:
- type: mcp
namespace: waf-security
port: 8080
tools:
- name: get-security-policy
description: "Given a GCP project and security policy name, return the Cloud Armor rules and configuration."
inputParameters:
- name: project_id
in: body
type: string
description: "The GCP project identifier."
- name: policy_name
in: body
type: string
description: "The Cloud Armor security policy name."
call: "gcp-compute.get-security-policy"
with:
project_id: "{{project_id}}"
policy_name: "{{policy_name}}"
consumes:
- type: http
namespace: gcp-compute
baseUri: "https://compute.googleapis.com/compute/v1"
authentication:
type: bearer
token: "$secrets.gcp_access_token"
resources:
- name: security-policies
path: "/projects/{{project_id}}/global/securityPolicies/{{policy_name}}"
inputParameters:
- name: project_id
in: path
- name: policy_name
in: path
operations:
- name: get-security-policy
method: GET
Creates or updates a DNS record in Cloud DNS and writes an audit log entry to Cloud Logging to track all DNS changes for security review.
naftiko: "0.5"
info:
label: "GCP Cloud DNS Record Update with Logging"
description: "Creates or updates a DNS record in Cloud DNS and writes an audit log entry to Cloud Logging to track all DNS changes for security review."
tags:
- cloud
- gcp
- google-cloud-platform
- dns
- networking
- security
capability:
exposes:
- type: mcp
namespace: dns-mgmt
port: 8080
tools:
- name: upsert-dns-record
description: "Given a GCP project, managed zone, record name, and target IP, update the DNS record and log the change."
inputParameters:
- name: project_id
in: body
type: string
description: "The GCP project identifier."
- name: managed_zone
in: body
type: string
description: "The Cloud DNS managed zone name."
- name: record_name
in: body
type: string
description: "The DNS record name (FQDN)."
- name: target_ip
in: body
type: string
description: "The target IP address for the A record."
steps:
- name: update-record
type: call
call: "cloud-dns.upsert-record"
with:
project_id: "{{project_id}}"
managed_zone: "{{managed_zone}}"
record_name: "{{record_name}}"
target_ip: "{{target_ip}}"
- name: log-change
type: call
call: "cloud-logging.write-entry"
with:
project_id: "{{project_id}}"
log_name: "dns-changes"
message: "DNS update: {{record_name}} -> {{target_ip}} in zone {{managed_zone}}"
consumes:
- type: http
namespace: cloud-dns
baseUri: "https://dns.googleapis.com/dns/v1"
authentication:
type: bearer
token: "$secrets.gcp_access_token"
resources:
- name: resource-record-sets
path: "/projects/{{project_id}}/managedZones/{{managed_zone}}/rrsets"
inputParameters:
- name: project_id
in: path
- name: managed_zone
in: path
operations:
- name: upsert-record
method: POST
- type: http
namespace: cloud-logging
baseUri: "https://logging.googleapis.com/v2"
authentication:
type: bearer
token: "$secrets.gcp_access_token"
resources:
- name: entries
path: "/entries:write"
operations:
- name: write-entry
method: POST
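The log-change step posts to the Cloud Logging `entries:write` endpoint. A sketch of the body it would send; the `global` monitored resource is an assumption for a plain text entry, and the helper name is illustrative:

```python
def build_audit_log_entry(project_id: str, log_name: str, message: str) -> dict:
    """Body for POST https://logging.googleapis.com/v2/entries:write."""
    return {
        "entries": [
            {
                # logName must be the fully qualified form, not the bare name
                "logName": f"projects/{project_id}/logs/{log_name}",
                "resource": {"type": "global"},
                "textPayload": message,
            }
        ]
    }
```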
Invokes a Google Cloud Function by name with a JSON payload and returns the function response.
naftiko: "0.5"
info:
label: "GCP Cloud Functions Invocation"
description: "Invokes a Google Cloud Function by name with a JSON payload and returns the function response."
tags:
- automation
- gcp
- google-cloud-platform
- serverless
capability:
exposes:
- type: mcp
namespace: cloud-functions
port: 8080
tools:
- name: invoke-function
description: "Given a GCP project, region, and function name, invoke the Cloud Function with the provided payload."
inputParameters:
- name: project_id
in: body
type: string
description: "The GCP project identifier."
- name: region
in: body
type: string
description: "The GCP region of the function."
- name: function_name
in: body
type: string
description: "The Cloud Function name."
- name: payload
in: body
type: string
description: "The JSON payload to send to the function."
call: "cloud-functions.call-function"
with:
project_id: "{{project_id}}"
region: "{{region}}"
function_name: "{{function_name}}"
payload: "{{payload}}"
consumes:
- type: http
namespace: cloud-functions
baseUri: "https://cloudfunctions.googleapis.com/v1"
authentication:
type: bearer
token: "$secrets.gcp_access_token"
resources:
- name: functions
path: "/projects/{{project_id}}/locations/{{region}}/functions/{{function_name}}:call"
inputParameters:
- name: project_id
in: path
- name: region
in: path
- name: function_name
in: path
operations:
- name: call-function
method: POST
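The `:call` RPC belongs to the Cloud Functions v1 API and wraps the input in a `data` field. A hedged sketch of the request the consumer issues (helper name is our own):

```python
def build_function_call_request(project_id: str, region: str,
                                function_name: str, payload: str) -> dict:
    """Request for the Cloud Functions v1 functions.call RPC (test invocation)."""
    base = "https://cloudfunctions.googleapis.com/v1"
    name = f"projects/{project_id}/locations/{region}/functions/{function_name}"
    return {
        "method": "POST",
        "url": f"{base}/{name}:call",
        # functions.call takes the payload as a single "data" string
        "body": {"data": payload},
    }
```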
Executes a Cloud Logging query and exports the matching entries to a GCS bucket as a JSON file for offline analysis.
naftiko: "0.5"
info:
label: "GCP Cloud Logging Query with Export"
description: "Executes a Cloud Logging query and exports the matching entries to a GCS bucket as a JSON file for offline analysis."
tags:
- observability
- gcp
- google-cloud-platform
- logging
- storage
capability:
exposes:
- type: mcp
namespace: cloud-logging
port: 8080
tools:
- name: query-logs
description: "Given a GCP project, log filter, and GCS bucket, query logs and export results to storage."
inputParameters:
- name: project_id
in: body
type: string
description: "The GCP project identifier."
- name: filter_expression
in: body
type: string
description: "The Cloud Logging filter expression."
- name: bucket_name
in: body
type: string
description: "The GCS bucket for log export."
steps:
- name: query-entries
type: call
call: "cloud-logging.list-entries"
with:
project_id: "{{project_id}}"
filter_expression: "{{filter_expression}}"
- name: export-to-gcs
type: call
call: "gcs.upload-object"
with:
bucket_name: "{{bucket_name}}"
file_name: "log-exports/{{project_id}}-query-results.json"
consumes:
- type: http
namespace: cloud-logging
baseUri: "https://logging.googleapis.com/v2"
authentication:
type: bearer
token: "$secrets.gcp_access_token"
resources:
- name: entries
path: "/entries:list"
operations:
- name: list-entries
method: POST
- type: http
namespace: gcs
baseUri: "https://storage.googleapis.com/upload/storage/v1"
authentication:
type: bearer
token: "$secrets.gcp_access_token"
resources:
- name: objects
path: "/b/{{bucket_name}}/o?uploadType=media&name={{file_name}}"
inputParameters:
- name: bucket_name
in: path
- name: file_name
in: query
operations:
- name: upload-object
method: POST
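The query-entries step maps to the `entries:list` method, which takes its filter in a POST body rather than the query string. A sketch of that body; `orderBy` and `pageSize` are reasonable defaults we have assumed, not values the capability pins down:

```python
def build_log_query_body(project_id: str, filter_expression: str,
                         page_size: int = 100) -> dict:
    """Body for POST https://logging.googleapis.com/v2/entries:list."""
    return {
        # entries.list scopes the query to one or more parent resources
        "resourceNames": [f"projects/{project_id}"],
        "filter": filter_expression,
        "orderBy": "timestamp desc",
        "pageSize": page_size,
    }
```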
Retrieves a Cloud Monitoring alerting policy and records the audit event in Firestore for compliance tracking.
naftiko: "0.5"
info:
label: "GCP Cloud Monitoring Alert Policy Audit"
description: "Retrieves a Cloud Monitoring alerting policy and records the audit event in Firestore for compliance tracking."
tags:
- observability
- gcp
- google-cloud-platform
- monitoring
- compliance
capability:
exposes:
- type: mcp
namespace: cloud-monitoring
port: 8080
tools:
- name: get-alert-policy
description: "Given a GCP project and alert policy name, return the policy configuration and record the audit in Firestore."
inputParameters:
- name: project_id
in: body
type: string
description: "The GCP project identifier."
- name: policy_name
in: body
type: string
description: "The alert policy identifier (the final segment of the policy resource name)."
steps:
- name: fetch-policy
type: call
call: "monitoring.get-alert-policy"
with:
project_id: "{{project_id}}"
policy_name: "{{policy_name}}"
- name: record-audit
type: call
call: "firestore.create-document"
with:
project_id: "{{project_id}}"
collection: "alert-policy-audits"
policy_name: "{{policy_name}}"
enabled: "{{fetch-policy.enabled}}"
conditions_count: "{{fetch-policy.conditions_count}}"
consumes:
- type: http
namespace: monitoring
baseUri: "https://monitoring.googleapis.com/v3"
authentication:
type: bearer
token: "$secrets.gcp_access_token"
resources:
- name: alert-policies
path: "/projects/{{project_id}}/alertPolicies/{{policy_name}}"
inputParameters:
- name: project_id
in: path
- name: policy_name
in: path
operations:
- name: get-alert-policy
method: GET
- type: http
namespace: firestore
baseUri: "https://firestore.googleapis.com/v1"
authentication:
type: bearer
token: "$secrets.gcp_access_token"
resources:
- name: documents
path: "/projects/{{project_id}}/databases/(default)/documents/{{collection}}"
inputParameters:
- name: project_id
in: path
- name: collection
in: path
operations:
- name: create-document
method: POST
Writes a custom metric time series data point to Cloud Monitoring for application-level KPI tracking.
naftiko: "0.5"
info:
label: "GCP Cloud Monitoring Custom Metric Writer"
description: "Writes a custom metric time series data point to Cloud Monitoring for application-level KPI tracking."
tags:
- observability
- gcp
- google-cloud-platform
- monitoring
- metrics
capability:
exposes:
- type: mcp
namespace: custom-metrics
port: 8080
tools:
- name: write-custom-metric
description: "Given a GCP project, metric type, and value, write a custom metric data point to Cloud Monitoring."
inputParameters:
- name: project_id
in: body
type: string
description: "The GCP project identifier."
- name: metric_type
in: body
type: string
description: "The custom metric type path (e.g. custom.googleapis.com/my_metric)."
- name: metric_value
in: body
type: number
description: "The metric value to record."
call: "monitoring.create-timeseries"
with:
project_id: "{{project_id}}"
metric_type: "{{metric_type}}"
metric_value: "{{metric_value}}"
consumes:
- type: http
namespace: monitoring
baseUri: "https://monitoring.googleapis.com/v3"
authentication:
type: bearer
token: "$secrets.gcp_access_token"
resources:
- name: timeseries
path: "/projects/{{project_id}}/timeSeries"
inputParameters:
- name: project_id
in: path
operations:
- name: create-timeseries
method: POST
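The timeSeries.create body has a fixed envelope: one series with a metric type, a monitored resource, and a single point. A sketch under the assumption of a `global` resource and a double-valued gauge (the end time is passed in so the payload is deterministic):

```python
def build_timeseries_payload(metric_type: str, metric_value: float,
                             end_time: str) -> dict:
    """Body for POST /v3/projects/{project_id}/timeSeries (timeSeries.create)."""
    return {
        "timeSeries": [
            {
                "metric": {"type": metric_type},
                # "global" is an assumption; production callers usually attach
                # a more specific monitored resource (gce_instance, etc.)
                "resource": {"type": "global"},
                "points": [
                    {
                        "interval": {"endTime": end_time},
                        "value": {"doubleValue": metric_value},
                    }
                ],
            }
        ]
    }
```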
Retrieves the status of a Cloud Run service and queries Cloud Monitoring for request latency and error rate metrics on the latest revision.
naftiko: "0.5"
info:
label: "GCP Cloud Run Service Status with Metrics"
description: "Retrieves the status of a Cloud Run service and queries Cloud Monitoring for request latency and error rate metrics on the latest revision."
tags:
- cloud
- gcp
- google-cloud-platform
- serverless
- observability
capability:
exposes:
- type: mcp
namespace: cloud-run
port: 8080
tools:
- name: get-service-status
description: "Given a GCP project, region, and Cloud Run service name, return service status and recent performance metrics."
inputParameters:
- name: project_id
in: body
type: string
description: "The GCP project identifier."
- name: region
in: body
type: string
description: "The GCP region of the Cloud Run service."
- name: service_name
in: body
type: string
description: "The Cloud Run service name."
steps:
- name: fetch-service
type: call
call: "cloud-run.get-service"
with:
project_id: "{{project_id}}"
region: "{{region}}"
service_name: "{{service_name}}"
- name: fetch-metrics
type: call
call: "monitoring.query-timeseries"
with:
project_id: "{{project_id}}"
filter: "resource.type=\"cloud_run_revision\" AND resource.labels.service_name=\"{{service_name}}\""
consumes:
- type: http
namespace: cloud-run
baseUri: "https://run.googleapis.com/v2"
authentication:
type: bearer
token: "$secrets.gcp_access_token"
resources:
- name: services
path: "/projects/{{project_id}}/locations/{{region}}/services/{{service_name}}"
inputParameters:
- name: project_id
in: path
- name: region
in: path
- name: service_name
in: path
operations:
- name: get-service
method: GET
- type: http
namespace: monitoring
baseUri: "https://monitoring.googleapis.com/v3"
authentication:
type: bearer
token: "$secrets.gcp_access_token"
resources:
- name: timeseries
path: "/projects/{{project_id}}/timeSeries"
inputParameters:
- name: project_id
in: path
operations:
- name: query-timeseries
method: GET
Retrieves the configuration and recent execution history of a Cloud Scheduler job, returning schedule, status, and last run time.
naftiko: "0.5"
info:
label: "GCP Cloud Scheduler Job Management"
description: "Retrieves the configuration and recent execution history of a Cloud Scheduler job, returning schedule, status, and last run time."
tags:
- automation
- gcp
- google-cloud-platform
- scheduling
capability:
exposes:
- type: mcp
namespace: cloud-scheduler
port: 8080
tools:
- name: get-scheduler-job
description: "Given a GCP project, region, and job name, return the Cloud Scheduler job configuration and status."
inputParameters:
- name: project_id
in: body
type: string
description: "The GCP project identifier."
- name: region
in: body
type: string
description: "The GCP region of the scheduler job."
- name: job_name
in: body
type: string
description: "The Cloud Scheduler job name."
call: "scheduler.get-job"
with:
project_id: "{{project_id}}"
region: "{{region}}"
job_name: "{{job_name}}"
consumes:
- type: http
namespace: scheduler
baseUri: "https://cloudscheduler.googleapis.com/v1"
authentication:
type: bearer
token: "$secrets.gcp_access_token"
resources:
- name: jobs
path: "/projects/{{project_id}}/locations/{{region}}/jobs/{{job_name}}"
inputParameters:
- name: project_id
in: path
- name: region
in: path
- name: job_name
in: path
operations:
- name: get-job
method: GET
Lists all objects in a GCS bucket, calculates total storage usage, and writes an audit log entry to Cloud Logging for cost tracking.
naftiko: "0.5"
info:
label: "GCP Cloud Storage Bucket Audit"
description: "Lists all objects in a GCS bucket, calculates total storage usage, and writes an audit log entry to Cloud Logging for cost tracking."
tags:
- cloud
- gcp
- google-cloud-platform
- storage
- observability
capability:
exposes:
- type: mcp
namespace: cloud-storage
port: 8080
tools:
- name: list-bucket-objects
description: "Given a GCS bucket name and optional prefix, list all objects and log the audit event."
inputParameters:
- name: bucket_name
in: body
type: string
description: "The Google Cloud Storage bucket name."
- name: prefix
in: body
type: string
description: "Optional prefix to filter objects."
- name: project_id
in: body
type: string
description: "The GCP project for logging."
steps:
- name: list-objects
type: call
call: "gcs.list-objects"
with:
bucket_name: "{{bucket_name}}"
prefix: "{{prefix}}"
- name: log-audit
type: call
call: "cloud-logging.write-entry"
with:
project_id: "{{project_id}}"
log_name: "storage-audits"
message: "Bucket audit: {{bucket_name}} prefix={{prefix}}, object count: {{list-objects.items_count}}"
consumes:
- type: http
namespace: gcs
baseUri: "https://storage.googleapis.com/storage/v1"
authentication:
type: bearer
token: "$secrets.gcp_access_token"
resources:
- name: objects
path: "/b/{{bucket_name}}/o?prefix={{prefix}}"
inputParameters:
- name: bucket_name
in: path
- name: prefix
in: query
operations:
- name: list-objects
method: GET
- type: http
namespace: cloud-logging
baseUri: "https://logging.googleapis.com/v2"
authentication:
type: bearer
token: "$secrets.gcp_access_token"
resources:
- name: entries
path: "/entries:write"
operations:
- name: write-entry
method: POST
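The list-objects resource embeds the prefix as a query parameter, so it needs URL encoding when the prefix contains slashes. A small sketch of the URL the gcs consumer would hit (helper name is illustrative):

```python
from urllib.parse import urlencode

def build_list_objects_url(bucket_name: str, prefix: str = "") -> str:
    """URL for GET storage/v1/b/{bucket}/o, mirroring the gcs consumer above."""
    base = f"https://storage.googleapis.com/storage/v1/b/{bucket_name}/o"
    if prefix:
        # urlencode percent-escapes slashes and spaces in the prefix
        return f"{base}?{urlencode({'prefix': prefix})}"
    return base
```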
Lists tasks in a Cloud Tasks queue and returns the count, oldest task age, and dispatch rate for queue health monitoring.
naftiko: "0.5"
info:
label: "GCP Cloud Tasks Queue Inspector"
description: "Lists tasks in a Cloud Tasks queue and returns the count, oldest task age, and dispatch rate for queue health monitoring."
tags:
- automation
- gcp
- google-cloud-platform
- observability
capability:
exposes:
- type: mcp
namespace: cloud-tasks
port: 8080
tools:
- name: inspect-queue
description: "Given a GCP project, region, and queue name, return task count, oldest task age, and queue configuration."
inputParameters:
- name: project_id
in: body
type: string
description: "The GCP project identifier."
- name: region
in: body
type: string
description: "The GCP region of the queue."
- name: queue_name
in: body
type: string
description: "The Cloud Tasks queue name."
call: "cloud-tasks.get-queue"
with:
project_id: "{{project_id}}"
region: "{{region}}"
queue_name: "{{queue_name}}"
consumes:
- type: http
namespace: cloud-tasks
baseUri: "https://cloudtasks.googleapis.com/v2"
authentication:
type: bearer
token: "$secrets.gcp_access_token"
resources:
- name: queues
path: "/projects/{{project_id}}/locations/{{region}}/queues/{{queue_name}}"
inputParameters:
- name: project_id
in: path
- name: region
in: path
- name: queue_name
in: path
operations:
- name: get-queue
method: GET
Retrieves distributed trace spans from Cloud Trace for a specific trace ID, returning span names, durations, and service labels for latency debugging.
naftiko: "0.5"
info:
label: "GCP Cloud Trace Span Lookup"
description: "Retrieves distributed trace spans from Cloud Trace for a specific trace ID, returning span names, durations, and service labels for latency debugging."
tags:
- observability
- gcp
- google-cloud-platform
- tracing
capability:
exposes:
- type: mcp
namespace: distributed-tracing
port: 8080
tools:
- name: get-trace-spans
description: "Given a GCP project and trace ID, return all spans with their durations and service labels."
inputParameters:
- name: project_id
in: body
type: string
description: "The GCP project identifier."
- name: trace_id
in: body
type: string
description: "The distributed trace identifier."
call: "cloud-trace.get-trace"
with:
project_id: "{{project_id}}"
trace_id: "{{trace_id}}"
consumes:
- type: http
namespace: cloud-trace
baseUri: "https://cloudtrace.googleapis.com/v1"
authentication:
type: bearer
token: "$secrets.gcp_access_token"
resources:
- name: traces
path: "/projects/{{project_id}}/traces/{{trace_id}}"
inputParameters:
- name: project_id
in: path
- name: trace_id
in: path
operations:
- name: get-trace
method: GET
Analyzes BigQuery usage patterns, identifies idle resources, creates Jira tickets for cleanup, updates Grafana cost dashboard, and notifies finance.
naftiko: "0.5"
info:
label: "GCP Cost Optimization Pipeline"
description: "Analyzes BigQuery usage patterns, identifies idle resources, creates Jira tickets for cleanup, updates Grafana cost dashboard, and notifies finance."
tags:
- finops
- bigquery
- jira
- grafana
- slack
capability:
exposes:
- type: mcp
namespace: finops
port: 8080
tools:
- name: gcp_cost_optimization_pipeline
description: "Orchestrate the GCP cost optimization pipeline workflow."
inputParameters:
- name: resource_id
in: body
type: string
description: "Primary resource identifier."
steps:
- name: get-bigquery
type: call
call: "bigquery.get-resource"
with:
resource_id: "{{resource_id}}"
- name: process-jira
type: call
call: "jira.process-resource"
with:
resource_id: "{{resource_id}}"
- name: create-grafana
type: call
call: "grafana.create-resource"
with:
resource_id: "{{resource_id}}"
- name: notify-slack
type: call
call: "slack.notify-resource"
with:
resource_id: "{{resource_id}}"
consumes:
- type: http
namespace: bigquery
baseUri: "https://bigquery.googleapis.com/bigquery/v2"
authentication:
type: bearer
token: "$secrets.gcp_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: bigquery-op
method: POST
- type: http
namespace: jira
baseUri: "https://accord.atlassian.net/rest/api/3"
authentication:
type: basic
username: "$secrets.jira_user"
password: "$secrets.jira_api_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: jira-op
method: POST
- type: http
namespace: grafana
baseUri: "https://accord-grafana.com/api"
authentication:
type: bearer
token: "$secrets.grafana_api_key"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: grafana-op
method: POST
- type: http
namespace: slack
baseUri: "https://slack.com/api"
authentication:
type: bearer
token: "$secrets.slack_bot_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: slack-op
method: POST
Exports data from source via Cloud SQL, validates schema in BigQuery, runs integrity checks, updates Confluence docs, and notifies the DBA team.
naftiko: "0.5"
info:
label: "GCP Database Migration Pipeline"
description: "Exports data from source via Cloud SQL, validates schema in BigQuery, runs integrity checks, updates Confluence docs, and notifies the DBA team."
tags:
- data-engineering
- bigquery
- confluence
- slack
capability:
exposes:
- type: mcp
namespace: data-engineering
port: 8080
tools:
- name: gcp_database_migration_pipeline
description: "Orchestrate the GCP database migration pipeline workflow."
inputParameters:
- name: resource_id
in: body
type: string
description: "Primary resource identifier."
steps:
- name: get-bigquery
type: call
call: "bigquery.get-resource"
with:
resource_id: "{{resource_id}}"
- name: process-confluence
type: call
call: "confluence.process-resource"
with:
resource_id: "{{resource_id}}"
- name: create-slack
type: call
call: "slack.create-resource"
with:
resource_id: "{{resource_id}}"
- name: notify-servicenow
type: call
call: "servicenow.notify-resource"
with:
resource_id: "{{resource_id}}"
consumes:
- type: http
namespace: bigquery
baseUri: "https://bigquery.googleapis.com/bigquery/v2"
authentication:
type: bearer
token: "$secrets.gcp_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: bigquery-op
method: POST
- type: http
namespace: confluence
baseUri: "https://accord.atlassian.net/wiki/rest/api"
authentication:
type: basic
username: "$secrets.confluence_user"
password: "$secrets.confluence_api_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: confluence-op
method: POST
- type: http
namespace: slack
baseUri: "https://slack.com/api"
authentication:
type: bearer
token: "$secrets.slack_bot_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: slack-op
method: POST
- type: http
namespace: servicenow
baseUri: "https://accord.service-now.com/api/now"
authentication:
type: basic
username: "$secrets.servicenow_user"
password: "$secrets.servicenow_password"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: servicenow-op
method: POST
Fetches recent error groups from GCP Error Reporting, formats a summary, and uploads an Excel digest to Cloud Storage for stakeholder review.
naftiko: "0.5"
info:
label: "GCP Error Reporting to Excel Digest"
description: "Fetches recent error groups from GCP Error Reporting, formats a summary, and uploads an Excel digest to Cloud Storage for stakeholder review."
tags:
- observability
- gcp
- google-cloud-platform
- excel
- reporting
capability:
exposes:
- type: mcp
namespace: error-reporting
port: 8080
tools:
- name: generate-error-digest
description: "Given a GCP project ID and time window, fetch error groups from Error Reporting and upload an Excel summary to GCS."
inputParameters:
- name: project_id
in: body
type: string
description: "The GCP project identifier."
- name: time_range
in: body
type: string
description: "The time range for error lookup (e.g. PERIOD_1_HOUR)."
- name: bucket_name
in: body
type: string
description: "The GCS bucket for the Excel digest."
steps:
- name: fetch-errors
type: call
call: "error-reporting.list-groups"
with:
project_id: "{{project_id}}"
time_range: "{{time_range}}"
- name: upload-digest
type: call
call: "gcs.upload-object"
with:
bucket_name: "{{bucket_name}}"
file_name: "error-digest-{{project_id}}.xlsx"
consumes:
- type: http
namespace: error-reporting
baseUri: "https://clouderrorreporting.googleapis.com/v1beta1"
authentication:
type: bearer
token: "$secrets.gcp_access_token"
resources:
- name: group-stats
path: "/projects/{{project_id}}/groupStats?timeRange.period={{time_range}}"
inputParameters:
- name: project_id
in: path
- name: time_range
in: query
operations:
- name: list-groups
method: GET
- type: http
namespace: gcs
baseUri: "https://storage.googleapis.com/upload/storage/v1"
authentication:
type: bearer
token: "$secrets.gcp_access_token"
resources:
- name: objects
path: "/b/{{bucket_name}}/o?uploadType=media&name={{file_name}}"
inputParameters:
- name: bucket_name
in: path
- name: file_name
in: query
operations:
- name: upload-object
method: POST
Retrieves a Firestore document by collection and document ID, returning the full document fields and metadata.
naftiko: "0.5"
info:
label: "GCP Firestore Document Lookup"
description: "Retrieves a Firestore document by collection and document ID, returning the full document fields and metadata."
tags:
- data
- gcp
- google-cloud-platform
- firestore
capability:
exposes:
- type: mcp
namespace: document-store
port: 8080
tools:
- name: get-document
description: "Given a GCP project, Firestore collection, and document ID, return the document fields and metadata."
inputParameters:
- name: project_id
in: body
type: string
description: "The GCP project identifier."
- name: collection
in: body
type: string
description: "The Firestore collection name."
- name: document_id
in: body
type: string
description: "The Firestore document identifier."
call: "firestore.get-document"
with:
project_id: "{{project_id}}"
collection: "{{collection}}"
document_id: "{{document_id}}"
consumes:
- type: http
namespace: firestore
baseUri: "https://firestore.googleapis.com/v1"
authentication:
type: bearer
token: "$secrets.gcp_access_token"
resources:
- name: documents
path: "/projects/{{project_id}}/databases/(default)/documents/{{collection}}/{{document_id}}"
inputParameters:
- name: project_id
in: path
- name: collection
in: path
- name: document_id
in: path
operations:
- name: get-document
method: GET
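The Firestore REST API returns each field wrapped in a typed envelope (for example `{"count": {"integerValue": "3"}}`, with integers as strings). A minimal decoder sketch for turning that response into plain values; it covers only the common scalar types:

```python
def decode_firestore_fields(fields: dict) -> dict:
    """Unwrap Firestore REST typed values into plain Python values."""
    out = {}
    for key, wrapped in fields.items():
        # each field maps to exactly one {typeName: rawValue} pair
        (vtype, raw), = wrapped.items()
        if vtype == "integerValue":
            out[key] = int(raw)        # REST encodes integers as strings
        elif vtype == "doubleValue":
            out[key] = float(raw)
        elif vtype == "booleanValue":
            out[key] = bool(raw)
        else:
            out[key] = raw             # stringValue and other types pass through
    return out
```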
Audits GCP IAM policies, identifies over-provisioned accounts, creates remediation Jira tickets, updates Confluence docs, and notifies security.
naftiko: "0.5"
info:
label: "GCP IAM Access Review Pipeline"
description: "Audits GCP IAM policies, identifies over-provisioned accounts, creates remediation Jira tickets, updates Confluence docs, and notifies security."
tags:
- security
- gcp
- jira
- confluence
- slack
capability:
exposes:
- type: mcp
namespace: security
port: 8080
tools:
- name: gcp_iam_access_review_pipeline
description: "Orchestrate the GCP IAM access review pipeline workflow."
inputParameters:
- name: resource_id
in: body
type: string
description: "Primary resource identifier."
steps:
- name: get-gcp
type: call
call: "gcp.get-resource"
with:
resource_id: "{{resource_id}}"
- name: process-jira
type: call
call: "jira.process-resource"
with:
resource_id: "{{resource_id}}"
- name: create-confluence
type: call
call: "confluence.create-resource"
with:
resource_id: "{{resource_id}}"
- name: notify-slack
type: call
call: "slack.notify-resource"
with:
resource_id: "{{resource_id}}"
consumes:
- type: http
namespace: gcp
baseUri: "https://compute.googleapis.com/compute/v1/projects/accord"
authentication:
type: bearer
token: "$secrets.gcp_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: gcp-op
method: POST
- type: http
namespace: jira
baseUri: "https://accord.atlassian.net/rest/api/3"
authentication:
type: basic
username: "$secrets.jira_user"
password: "$secrets.jira_api_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: jira-op
method: POST
- type: http
namespace: confluence
baseUri: "https://accord.atlassian.net/wiki/rest/api"
authentication:
type: basic
username: "$secrets.confluence_user"
password: "$secrets.confluence_api_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: confluence-op
method: POST
- type: http
namespace: slack
baseUri: "https://slack.com/api"
authentication:
type: bearer
token: "$secrets.slack_bot_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: slack-op
method: POST
Creates a new service account key in GCP IAM, disables the previous key, and logs the rotation event to Cloud Logging for audit compliance.
naftiko: "0.5"
info:
label: "GCP IAM Service Account Key Rotation"
description: "Creates a new service account key in GCP IAM, disables the previous key, and logs the rotation event to Cloud Logging for audit compliance."
tags:
- security
- gcp
- google-cloud-platform
- automation
- compliance
capability:
exposes:
- type: mcp
namespace: iam-rotation
port: 8080
tools:
- name: rotate-service-account-key
description: "Given a GCP project and service account email, create a new key, disable the old one, and log the event."
inputParameters:
- name: project_id
in: body
type: string
description: "The GCP project identifier."
- name: service_account_email
in: body
type: string
description: "The service account email address."
- name: old_key_id
in: body
type: string
description: "The key ID of the current key to disable."
steps:
- name: create-new-key
type: call
call: "iam.create-key"
with:
project_id: "{{project_id}}"
service_account_email: "{{service_account_email}}"
- name: disable-old-key
type: call
call: "iam.disable-key"
with:
project_id: "{{project_id}}"
service_account_email: "{{service_account_email}}"
key_id: "{{old_key_id}}"
- name: log-rotation
type: call
call: "cloud-logging.write-entry"
with:
project_id: "{{project_id}}"
log_name: "iam-key-rotation"
message: "Rotated key for {{service_account_email}}. New key: {{create-new-key.key_id}}. Disabled key: {{old_key_id}}."
consumes:
- type: http
namespace: iam
baseUri: "https://iam.googleapis.com/v1"
authentication:
type: bearer
token: "$secrets.gcp_access_token"
resources:
- name: service-account-keys
path: "/projects/{{project_id}}/serviceAccounts/{{service_account_email}}/keys"
inputParameters:
- name: project_id
in: path
- name: service_account_email
in: path
operations:
- name: create-key
method: POST
- name: service-account-key
path: "/projects/{{project_id}}/serviceAccounts/{{service_account_email}}/keys/{{key_id}}:disable"
inputParameters:
- name: project_id
in: path
- name: service_account_email
in: path
- name: key_id
in: path
operations:
- name: disable-key
method: POST
- type: http
namespace: cloud-logging
baseUri: "https://logging.googleapis.com/v2"
authentication:
type: bearer
token: "$secrets.gcp_access_token"
resources:
- name: entries
path: "/entries:write"
operations:
- name: write-entry
method: POST
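The rotation tool is two sequential IAM calls. A sketch of the requests behind create-new-key and disable-old-key, assuming the IAM v1 `keys.disable` RPC (`POST .../keys/{key_id}:disable`); handling of the returned key material is out of scope here:

```python
def build_rotation_requests(project_id: str, sa_email: str, old_key_id: str) -> list:
    """The two IAM calls behind create-new-key and disable-old-key, in order."""
    sa = (f"https://iam.googleapis.com/v1/projects/{project_id}"
          f"/serviceAccounts/{sa_email}")
    return [
        {"method": "POST", "url": f"{sa}/keys"},                       # create-new-key
        {"method": "POST", "url": f"{sa}/keys/{old_key_id}:disable"},  # disable-old-key
    ]
```

Ordering matters: the new key is created and verified before the old one is disabled, so callers are never left without a valid credential.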
Monitors GCP resource utilization via Cloud Monitoring (formerly Stackdriver), evaluates scaling policies, applies Terraform changes, validates in Prometheus, and notifies ops in Slack.
naftiko: "0.5"
info:
label: "GCP Infrastructure Scaling Pipeline"
description: "Monitors GCP resource utilization via Cloud Monitoring (formerly Stackdriver), evaluates scaling policies, applies Terraform changes, validates in Prometheus, and notifies ops in Slack."
tags:
- infrastructure
- gcp
- prometheus
- slack
capability:
exposes:
- type: mcp
namespace: infrastructure
port: 8080
tools:
- name: gcp_infrastructure_scaling_pipeline
description: "Orchestrate the GCP infrastructure scaling pipeline workflow."
inputParameters:
- name: resource_id
in: body
type: string
description: "Primary resource identifier."
steps:
- name: get-gcp
type: call
call: "gcp.get-resource"
with:
resource_id: "{{resource_id}}"
- name: process-prometheus
type: call
call: "prometheus.process-resource"
with:
resource_id: "{{resource_id}}"
- name: create-slack
type: call
call: "slack.create-resource"
with:
resource_id: "{{resource_id}}"
- name: notify-servicenow
type: call
call: "servicenow.notify-resource"
with:
resource_id: "{{resource_id}}"
consumes:
- type: http
namespace: gcp
baseUri: "https://compute.googleapis.com/compute/v1/projects/accord"
authentication:
type: bearer
token: "$secrets.gcp_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: gcp-op
method: POST
- type: http
namespace: prometheus
baseUri: "https://accord-prometheus.com/api/v1"
authentication:
type: bearer
token: "$secrets.prometheus_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: prometheus-op
method: POST
- type: http
namespace: slack
baseUri: "https://slack.com/api"
authentication:
type: bearer
token: "$secrets.slack_bot_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: slack-op
method: POST
- type: http
namespace: servicenow
baseUri: "https://accord.service-now.com/api/now"
authentication:
type: basic
username: "$secrets.servicenow_user"
password: "$secrets.servicenow_password"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: servicenow-op
method: POST
Queries a Google Compute Engine instance by name, returns its current status and uptime metrics, and writes a health check log entry to Cloud Logging.
naftiko: "0.5"
info:
label: "GCP Instance Health Check with Logging"
description: "Queries a Google Compute Engine instance by name, returns its current status and uptime metrics, and writes a health check log entry to Cloud Logging."
tags:
- cloud
- gcp
- google-cloud-platform
- compute
- observability
capability:
exposes:
- type: mcp
namespace: cloud-compute
port: 8080
tools:
- name: get-instance-status
description: "Given a GCP project ID and instance name, return the instance status, zone, machine type, and creation timestamp, then log the health check."
inputParameters:
- name: project_id
in: body
type: string
description: "The GCP project identifier."
- name: instance_name
in: body
type: string
description: "The Compute Engine instance name."
- name: zone
in: body
type: string
description: "The GCP zone where the instance resides."
steps:
- name: fetch-instance
type: call
call: "gcp-compute.get-instance"
with:
project_id: "{{project_id}}"
instance_name: "{{instance_name}}"
zone: "{{zone}}"
- name: log-health-check
type: call
call: "cloud-logging.write-entry"
with:
project_id: "{{project_id}}"
log_name: "instance-health-checks"
message: "Health check: {{instance_name}} in {{zone}} -> status: {{fetch-instance.status}}"
consumes:
- type: http
namespace: gcp-compute
baseUri: "https://compute.googleapis.com/compute/v1"
authentication:
type: bearer
token: "$secrets.gcp_access_token"
resources:
- name: instances
path: "/projects/{{project_id}}/zones/{{zone}}/instances/{{instance_name}}"
inputParameters:
- name: project_id
in: path
- name: zone
in: path
- name: instance_name
in: path
operations:
- name: get-instance
method: GET
- type: http
namespace: cloud-logging
baseUri: "https://logging.googleapis.com/v2"
authentication:
type: bearer
token: "$secrets.gcp_access_token"
resources:
- name: entries
path: "/entries:write"
operations:
- name: write-entry
method: POST
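The two consumed endpoints above map to the Compute Engine and Cloud Logging v2 REST APIs. As a rough sketch of what the log-health-check step sends (the helper name and sample values below are illustrative, not part of the capability), the `entries:write` body could be assembled like this:

```python
def build_health_log_entry(project_id: str, instance_name: str,
                           zone: str, status: str) -> dict:
    """Body for POST https://logging.googleapis.com/v2/entries:write."""
    return {
        "entries": [{
            "logName": f"projects/{project_id}/logs/instance-health-checks",
            # entries:write requires a monitored resource; "global" is a
            # safe default when nothing more specific applies.
            "resource": {"type": "global", "labels": {"project_id": project_id}},
            "textPayload": f"Health check: {instance_name} in {zone} - status: {status}",
        }]
    }

body = build_health_log_entry("demo-project", "web-1", "us-central1-a", "RUNNING")
```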
Checks the health status of backends behind a GCP HTTP(S) load balancer and returns healthy/unhealthy instance counts per backend service.
naftiko: "0.5"
info:
label: "GCP Load Balancer Health Status"
description: "Checks the health status of backends behind a GCP HTTP(S) load balancer and returns healthy/unhealthy instance counts per backend service."
tags:
- observability
- gcp
- google-cloud-platform
- networking
- load-balancing
capability:
exposes:
- type: mcp
namespace: lb-health
port: 8080
tools:
- name: get-backend-health
description: "Given a GCP project and backend service name, return the health status of all backend instances."
inputParameters:
- name: project_id
in: body
type: string
description: "The GCP project identifier."
- name: backend_service
in: body
type: string
description: "The backend service name."
call: "gcp-compute.get-backend-health"
with:
project_id: "{{project_id}}"
backend_service: "{{backend_service}}"
consumes:
- type: http
namespace: gcp-compute
baseUri: "https://compute.googleapis.com/compute/v1"
authentication:
type: bearer
token: "$secrets.gcp_access_token"
resources:
- name: backend-services
path: "/projects/{{project_id}}/global/backendServices/{{backend_service}}/getHealth"
inputParameters:
- name: project_id
in: path
- name: backend_service
in: path
operations:
- name: get-backend-health
method: POST
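`backendServices.getHealth` is a POST that takes an instance group reference and returns per-backend `healthState` values, which is why the operation above uses POST rather than GET. A hypothetical helper to build the request and tally the response (the sample data is illustrative):

```python
def build_get_health_request(project_id: str, backend_service: str,
                             instance_group_url: str):
    """URL and body for the compute v1 backendServices.getHealth call."""
    url = (f"https://compute.googleapis.com/compute/v1/projects/{project_id}"
           f"/global/backendServices/{backend_service}/getHealth")
    return url, {"group": instance_group_url}

def tally_health(response: dict) -> dict:
    """Count healthy vs. unhealthy backends from a getHealth response."""
    counts = {"healthy": 0, "unhealthy": 0}
    for backend in response.get("healthStatus", []):
        key = "healthy" if backend.get("healthState") == "HEALTHY" else "unhealthy"
        counts[key] += 1
    return counts

sample = {"healthStatus": [{"healthState": "HEALTHY"},
                           {"healthState": "HEALTHY"},
                           {"healthState": "UNHEALTHY"}]}
```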
Exports Cloud Logging data to BigQuery, runs anomaly detection queries, creates Jira tickets for findings, updates Grafana, and notifies ops.
naftiko: "0.5"
info:
label: "GCP Log Analysis Pipeline"
description: "Exports Cloud Logging data to BigQuery, runs anomaly detection queries, creates Jira tickets for findings, updates Grafana, and notifies ops."
tags:
- observability
- bigquery
- jira
- grafana
- slack
capability:
exposes:
- type: mcp
namespace: observability
port: 8080
tools:
- name: gcp_log_analysis_pipeline
description: "Orchestrate the GCP log analysis pipeline workflow."
inputParameters:
- name: resource_id
in: body
type: string
description: "Primary resource identifier."
steps:
- name: get-bigquery
type: call
call: "bigquery.get-resource"
with:
resource_id: "{{resource_id}}"
- name: process-jira
type: call
call: "jira.process-resource"
with:
resource_id: "{{resource_id}}"
- name: create-grafana
type: call
call: "grafana.create-resource"
with:
resource_id: "{{resource_id}}"
- name: notify-slack
type: call
call: "slack.notify-resource"
with:
resource_id: "{{resource_id}}"
consumes:
- type: http
namespace: bigquery
baseUri: "https://bigquery.googleapis.com/bigquery/v2"
authentication:
type: bearer
token: "$secrets.gcp_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: bigquery-op
method: POST
- type: http
namespace: jira
baseUri: "https://accord.atlassian.net/rest/api/3"
authentication:
type: basic
username: "$secrets.jira_user"
password: "$secrets.jira_api_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: jira-op
method: POST
- type: http
namespace: grafana
baseUri: "https://accord-grafana.com/api"
authentication:
type: bearer
token: "$secrets.grafana_api_key"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: grafana-op
method: POST
- type: http
namespace: slack
baseUri: "https://slack.com/api"
authentication:
type: bearer
token: "$secrets.slack_bot_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: slack-op
method: POST
Publishes a message to a Google Cloud Pub/Sub topic with optional attributes for event-driven downstream processing.
naftiko: "0.5"
info:
label: "GCP Pub/Sub Topic Message Publisher"
description: "Publishes a message to a Google Cloud Pub/Sub topic with optional attributes for event-driven downstream processing."
tags:
- event-driven
- gcp
- google-cloud-platform
- pubsub
capability:
exposes:
- type: mcp
namespace: event-bus
port: 8080
tools:
- name: publish-message
description: "Given a GCP project and Pub/Sub topic, publish a message with optional attributes."
inputParameters:
- name: project_id
in: body
type: string
description: "The GCP project identifier."
- name: topic_name
in: body
type: string
description: "The Pub/Sub topic name."
- name: message_data
in: body
type: string
description: "The base64-encoded message payload."
call: "pubsub.publish"
with:
project_id: "{{project_id}}"
topic_name: "{{topic_name}}"
message_data: "{{message_data}}"
consumes:
- type: http
namespace: pubsub
baseUri: "https://pubsub.googleapis.com/v1"
authentication:
type: bearer
token: "$secrets.gcp_access_token"
resources:
- name: topics
path: "/projects/{{project_id}}/topics/{{topic_name}}:publish"
inputParameters:
- name: project_id
in: path
- name: topic_name
in: path
operations:
- name: publish
method: POST
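The `message_data` parameter above is expected to arrive base64-encoded, matching the Pub/Sub v1 wire format. A minimal sketch of building the `topics/{topic}:publish` body, including the optional attributes the description mentions (the helper name and values are assumptions for illustration):

```python
import base64

def build_publish_body(message: str, attributes=None) -> dict:
    """Body for POST .../topics/{topic}:publish (Pub/Sub v1 REST API)."""
    entry = {"data": base64.b64encode(message.encode("utf-8")).decode("ascii")}
    if attributes:
        entry["attributes"] = attributes  # optional key/value metadata
    return {"messages": [entry]}

body = build_publish_body("order-created", {"source": "checkout"})
```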
Accesses a secret version from GCP Secret Manager and returns the payload for runtime configuration injection.
naftiko: "0.5"
info:
label: "GCP Secret Manager Secret Retrieval"
description: "Accesses a secret version from GCP Secret Manager and returns the payload for runtime configuration injection."
tags:
- security
- gcp
- google-cloud-platform
- secrets
capability:
exposes:
- type: mcp
namespace: secret-mgmt
port: 8080
tools:
- name: get-secret-value
description: "Given a GCP project and secret name, retrieve the latest secret version payload."
inputParameters:
- name: project_id
in: body
type: string
description: "The GCP project identifier."
- name: secret_name
in: body
type: string
description: "The Secret Manager secret name."
call: "secret-manager.access-secret"
with:
project_id: "{{project_id}}"
secret_name: "{{secret_name}}"
consumes:
- type: http
namespace: secret-manager
baseUri: "https://secretmanager.googleapis.com/v1"
authentication:
type: bearer
token: "$secrets.gcp_access_token"
resources:
- name: secrets
path: "/projects/{{project_id}}/secrets/{{secret_name}}/versions/latest:access"
inputParameters:
- name: project_id
in: path
- name: secret_name
in: path
operations:
- name: access-secret
method: GET
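The `versions/latest:access` response wraps the secret bytes in a base64 `payload.data` field, so callers need one decode step before injection. A small illustrative helper (the sample secret name and value are made up):

```python
import base64

def decode_secret_payload(response: dict) -> str:
    """Decode the base64 payload of a versions/{v}:access response."""
    return base64.b64decode(response["payload"]["data"]).decode("utf-8")

sample = {
    "name": "projects/demo/secrets/db-password/versions/1",
    "payload": {"data": base64.b64encode(b"s3cr3t-value").decode("ascii")},
}
```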
Lists firewall rules for a GCP VPC network and returns rule names, directions, allowed protocols, and source ranges for security review.
naftiko: "0.5"
info:
label: "GCP VPC Firewall Rule Audit"
description: "Lists firewall rules for a GCP VPC network and returns rule names, directions, allowed protocols, and source ranges for security review."
tags:
- security
- gcp
- google-cloud-platform
- networking
capability:
exposes:
- type: mcp
namespace: network-security
port: 8080
tools:
- name: list-firewall-rules
description: "Given a GCP project, list all VPC firewall rules with their configurations for security audit."
inputParameters:
- name: project_id
in: body
type: string
description: "The GCP project identifier."
call: "gcp-compute.list-firewalls"
with:
project_id: "{{project_id}}"
consumes:
- type: http
namespace: gcp-compute
baseUri: "https://compute.googleapis.com/compute/v1"
authentication:
type: bearer
token: "$secrets.gcp_access_token"
resources:
- name: firewalls
path: "/projects/{{project_id}}/global/firewalls"
inputParameters:
- name: project_id
in: path
operations:
- name: list-firewalls
method: GET
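The `firewalls.list` response returns rules under `items`, each carrying the fields this capability reports. A hedged sketch of flattening that response for review (the sample rule is illustrative):

```python
def summarize_firewalls(response: dict) -> list:
    """Flatten a compute v1 firewalls.list response for security review."""
    summary = []
    for rule in response.get("items", []):
        summary.append({
            "name": rule["name"],
            "direction": rule.get("direction", "INGRESS"),
            "protocols": [a.get("IPProtocol") for a in rule.get("allowed", [])],
            "sourceRanges": rule.get("sourceRanges", []),
        })
    return summary

sample = {"items": [{
    "name": "allow-ssh",
    "direction": "INGRESS",
    "allowed": [{"IPProtocol": "tcp", "ports": ["22"]}],
    "sourceRanges": ["0.0.0.0/0"],
}]}
```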
Retrieves GitHub repository metadata for Accord.
naftiko: "0.5"
info:
label: "GitHub Repository Lookup"
description: "Retrieves GitHub repository metadata for Accord."
tags:
- devops
- github
- source-control
capability:
exposes:
- type: mcp
namespace: engineering
port: 8080
tools:
- name: get-repo
description: "Look up a GitHub repository for Accord."
inputParameters:
- name: repo_name
in: body
type: string
description: "The repository to look up, in owner/repo format."
call: "github.get-repo"
with:
repo_name: "{{repo_name}}"
consumes:
- type: http
namespace: github
baseUri: "https://api.github.com"
authentication:
type: bearer
token: "$secrets.github_token"
resources:
- name: repositories
path: "/repos/{{repo_name}}"
inputParameters:
- name: repo_name
in: path
operations:
- name: get-repo
method: GET

Compares deployed state against Git source of truth, identifies drift, creates Jira tickets, auto-syncs via ArgoCD, and notifies platform team.
naftiko: "0.5"
info:
label: "GitOps Drift Detection Pipeline"
description: "Compares deployed state against Git source of truth, identifies drift, creates Jira tickets, auto-syncs via ArgoCD, and notifies platform team."
tags:
- devops
- github
- jira
- slack
capability:
exposes:
- type: mcp
namespace: devops
port: 8080
tools:
- name: gitops_drift_detection_pipeline
description: "Orchestrate the GitOps drift detection pipeline workflow."
inputParameters:
- name: resource_id
in: body
type: string
description: "Primary resource identifier."
steps:
- name: get-github
type: call
call: "github.get-resource"
with:
resource_id: "{{resource_id}}"
- name: process-jira
type: call
call: "jira.process-resource"
with:
resource_id: "{{resource_id}}"
- name: create-slack
type: call
call: "slack.create-resource"
with:
resource_id: "{{resource_id}}"
- name: notify-servicenow
type: call
call: "servicenow.notify-resource"
with:
resource_id: "{{resource_id}}"
consumes:
- type: http
namespace: github
baseUri: "https://api.github.com"
authentication:
type: bearer
token: "$secrets.github_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: github-op
method: POST
- type: http
namespace: jira
baseUri: "https://accord.atlassian.net/rest/api/3"
authentication:
type: basic
username: "$secrets.jira_user"
password: "$secrets.jira_api_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: jira-op
method: POST
- type: http
namespace: slack
baseUri: "https://slack.com/api"
authentication:
type: bearer
token: "$secrets.slack_bot_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: slack-op
method: POST
- type: http
namespace: servicenow
baseUri: "https://accord.service-now.com/api/now"
authentication:
type: basic
username: "$secrets.servicenow_user"
password: "$secrets.servicenow_password"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: servicenow-op
method: POST
Adjusts the node pool size of a GKE cluster, waits for the operation to complete, and logs the scaling event to Cloud Logging for capacity planning.
naftiko: "0.5"
info:
label: "GKE Cluster Node Pool Scaling"
description: "Adjusts the node pool size of a GKE cluster, waits for the operation to complete, and logs the scaling event to Cloud Logging for capacity planning."
tags:
- automation
- gcp
- google-cloud-platform
- kubernetes
- scaling
capability:
exposes:
- type: mcp
namespace: gke-scaling
port: 8080
tools:
- name: scale-node-pool
description: "Given a GCP project, GKE cluster, node pool, and target size, scale the pool and log the event."
inputParameters:
- name: project_id
in: body
type: string
description: "The GCP project identifier."
- name: cluster_name
in: body
type: string
description: "The GKE cluster name."
- name: node_pool_name
in: body
type: string
description: "The node pool name to scale."
- name: zone
in: body
type: string
description: "The GCP zone of the cluster."
- name: target_size
in: body
type: integer
description: "The desired number of nodes."
steps:
- name: resize-pool
type: call
call: "gke.resize-node-pool"
with:
project_id: "{{project_id}}"
cluster_name: "{{cluster_name}}"
node_pool_name: "{{node_pool_name}}"
zone: "{{zone}}"
target_size: "{{target_size}}"
- name: log-scaling
type: call
call: "cloud-logging.write-entry"
with:
project_id: "{{project_id}}"
log_name: "gke-scaling"
message: "Scaled {{node_pool_name}} in {{cluster_name}} to {{target_size}} nodes."
consumes:
- type: http
namespace: gke
baseUri: "https://container.googleapis.com/v1"
authentication:
type: bearer
token: "$secrets.gcp_access_token"
resources:
- name: node-pools
path: "/projects/{{project_id}}/zones/{{zone}}/clusters/{{cluster_name}}/nodePools/{{node_pool_name}}/setSize"
inputParameters:
- name: project_id
in: path
- name: zone
in: path
- name: cluster_name
in: path
- name: node_pool_name
in: path
operations:
- name: resize-node-pool
method: POST
- type: http
namespace: cloud-logging
baseUri: "https://logging.googleapis.com/v2"
authentication:
type: bearer
token: "$secrets.gcp_access_token"
resources:
- name: entries
path: "/entries:write"
operations:
- name: write-entry
method: POST
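The `setSize` operation above takes the target node count in its request body. A hypothetical sketch of how the resize-pool step's URL and body could be composed (names and values are illustrative):

```python
def build_set_size_request(project_id: str, zone: str, cluster_name: str,
                           node_pool_name: str, target_size: int):
    """URL and body for the container v1 nodePools setSize call."""
    url = (f"https://container.googleapis.com/v1/projects/{project_id}"
           f"/zones/{zone}/clusters/{cluster_name}"
           f"/nodePools/{node_pool_name}/setSize")
    return url, {"nodeCount": target_size}

url, body = build_set_size_request("demo", "us-central1-a", "prod", "default-pool", 5)
```

The call returns a long-running operation, so in practice the capability's "waits for the operation to complete" step would poll the returned operation resource before logging.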
Plans GKE upgrade, drains nodes sequentially, monitors workload health in Prometheus, validates in Grafana, and notifies platform team.
naftiko: "0.5"
info:
label: "GKE Cluster Upgrade Orchestrator"
description: "Plans GKE upgrade, drains nodes sequentially, monitors workload health in Prometheus, validates in Grafana, and notifies platform team."
tags:
- devops
- kubernetes
- prometheus
- grafana
- slack
capability:
exposes:
- type: mcp
namespace: devops
port: 8080
tools:
- name: gke_cluster_upgrade_orchestrator
description: "Orchestrate the GKE cluster upgrade workflow."
inputParameters:
- name: resource_id
in: body
type: string
description: "Primary resource identifier."
steps:
- name: get-kubernetes
type: call
call: "k8s.get-resource"
with:
resource_id: "{{resource_id}}"
- name: process-prometheus
type: call
call: "prometheus.process-resource"
with:
resource_id: "{{resource_id}}"
- name: create-grafana
type: call
call: "grafana.create-resource"
with:
resource_id: "{{resource_id}}"
- name: notify-slack
type: call
call: "slack.notify-resource"
with:
resource_id: "{{resource_id}}"
consumes:
- type: http
namespace: k8s
baseUri: "https://accord-k8s.com/api/v1"
authentication:
type: bearer
token: "$secrets.k8s_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: kubernetes-op
method: POST
- type: http
namespace: prometheus
baseUri: "https://accord-prometheus.com/api/v1"
authentication:
type: bearer
token: "$secrets.prometheus_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: prometheus-op
method: POST
- type: http
namespace: grafana
baseUri: "https://accord-grafana.com/api"
authentication:
type: bearer
token: "$secrets.grafana_api_key"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: grafana-op
method: POST
- type: http
namespace: slack
baseUri: "https://slack.com/api"
authentication:
type: bearer
token: "$secrets.slack_bot_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: slack-op
method: POST
Triggers a Cloud Build for a Go microservice, runs tests, pushes the Docker image to Artifact Registry, and deploys to a Kubernetes cluster on GKE.
naftiko: "0.5"
info:
label: "Go Microservice Build and Deploy"
description: "Triggers a Cloud Build for a Go microservice, runs tests, pushes the Docker image to Artifact Registry, and deploys to a Kubernetes cluster on GKE."
tags:
- automation
- go
- docker
- gcp
- google-cloud-platform
- kubernetes
- deployment
capability:
exposes:
- type: mcp
namespace: go-deploy
port: 8080
tools:
- name: build-and-deploy-go-service
description: "Given a GCP project, Go source repo, and target GKE cluster details, build, test, and deploy the service."
inputParameters:
- name: project_id
in: body
type: string
description: "The GCP project identifier."
- name: repo_name
in: body
type: string
description: "The Cloud Source Repository name for the Go service."
- name: cluster_name
in: body
type: string
description: "The GKE cluster name."
- name: namespace
in: body
type: string
description: "The Kubernetes namespace to deploy into."
- name: deployment_name
in: body
type: string
description: "The Kubernetes deployment name."
steps:
- name: build-image
type: call
call: "cloud-build.create-build"
with:
project_id: "{{project_id}}"
repo_name: "{{repo_name}}"
- name: deploy-to-gke
type: call
call: "k8s.patch-deployment"
with:
namespace: "{{namespace}}"
deployment_name: "{{deployment_name}}"
image_tag: "{{build-image.image_uri}}"
consumes:
- type: http
namespace: cloud-build
baseUri: "https://cloudbuild.googleapis.com/v1"
authentication:
type: bearer
token: "$secrets.gcp_access_token"
resources:
- name: builds
path: "/projects/{{project_id}}/builds"
inputParameters:
- name: project_id
in: path
operations:
- name: create-build
method: POST
- type: http
namespace: k8s
baseUri: "https://kubernetes.default.svc"
authentication:
type: bearer
token: "$secrets.k8s_service_token"
resources:
- name: deployments
path: "/apis/apps/v1/namespaces/{{namespace}}/deployments/{{deployment_name}}"
inputParameters:
- name: namespace
in: path
- name: deployment_name
in: path
operations:
- name: patch-deployment
method: PATCH
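The deploy-to-gke step PATCHes the deployment with the freshly built image. Assuming a strategic merge patch (one reasonable choice; the container name and image URI below are illustrative), the body only needs the container entry being changed:

```python
def build_image_patch(container_name: str, image_uri: str) -> dict:
    """Strategic-merge-patch body for PATCH .../deployments/{name}.

    Send with Content-Type: application/strategic-merge-patch+json so the
    API server merges the container list by name instead of replacing it.
    """
    return {"spec": {"template": {"spec": {
        "containers": [{"name": container_name, "image": image_uri}]
    }}}}

patch = build_image_patch("api", "us-docker.pkg.dev/demo/images/api:abc123")
```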
Queries Grafana dashboard data for Accord monitoring.
naftiko: "0.5"
info:
label: "Grafana Dashboard Query"
description: "Queries Grafana dashboard data for Accord monitoring."
tags:
- monitoring
- grafana
- dashboards
capability:
exposes:
- type: mcp
namespace: monitoring
port: 8080
tools:
- name: get-dashboard
description: "Query a Grafana dashboard for Accord monitoring."
inputParameters:
- name: dashboard_uid
in: body
type: string
description: "The dashboard UID to look up."
call: "grafana.get-dashboard"
with:
dashboard_uid: "{{dashboard_uid}}"
consumes:
- type: http
namespace: grafana
baseUri: "https://accord-grafana.com/api"
authentication:
type: bearer
token: "$secrets.grafana_api_key"
resources:
- name: dashboards
path: "/dashboards/uid/{{dashboard_uid}}"
inputParameters:
- name: dashboard_uid
in: path
operations:
- name: get-dashboard
method: GET
Detects incidents via Prometheus alerts, executes automated runbooks, validates resolution via health checks, logs in ServiceNow, and notifies SRE.
naftiko: "0.5"
info:
label: "Incident Auto Remediation Pipeline"
description: "Detects incidents via Prometheus alerts, executes automated runbooks, validates resolution via health checks, logs in ServiceNow, and notifies SRE."
tags:
- sre
- prometheus
- servicenow
- slack
capability:
exposes:
- type: mcp
namespace: sre
port: 8080
tools:
- name: incident_auto_remediation_pipeline
description: "Orchestrate the incident auto-remediation pipeline workflow."
inputParameters:
- name: resource_id
in: body
type: string
description: "Primary resource identifier."
steps:
- name: get-prometheus
type: call
call: "prometheus.get-resource"
with:
resource_id: "{{resource_id}}"
- name: process-servicenow
type: call
call: "servicenow.process-resource"
with:
resource_id: "{{resource_id}}"
- name: create-slack
type: call
call: "slack.create-resource"
with:
resource_id: "{{resource_id}}"
consumes:
- type: http
namespace: prometheus
baseUri: "https://accord-prometheus.com/api/v1"
authentication:
type: bearer
token: "$secrets.prometheus_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: prometheus-op
method: POST
- type: http
namespace: servicenow
baseUri: "https://accord.service-now.com/api/now"
authentication:
type: basic
username: "$secrets.servicenow_user"
password: "$secrets.servicenow_password"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: servicenow-op
method: POST
- type: http
namespace: slack
baseUri: "https://slack.com/api"
authentication:
type: bearer
token: "$secrets.slack_bot_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: slack-op
method: POST
When a Cloud Monitoring alert fires, fetches alert details, queries related logs from Cloud Logging, creates an incident record in Firestore, and publishes to Pub/Sub for on-call notification.
naftiko: "0.5"
info:
label: "Incident Response Pipeline"
description: "When a Cloud Monitoring alert fires, fetches alert details, queries related logs from Cloud Logging, creates an incident record in Firestore, and publishes to Pub/Sub for on-call notification."
tags:
- automation
- gcp
- google-cloud-platform
- observability
- event-driven
- incident-management
capability:
exposes:
- type: mcp
namespace: incident-response
port: 8080
tools:
- name: handle-monitoring-alert
description: "Given a GCP project and alert policy name, fetch alert details, query logs, record the incident, and notify on-call."
inputParameters:
- name: project_id
in: body
type: string
description: "The GCP project identifier."
- name: policy_name
in: body
type: string
description: "The Cloud Monitoring alert policy name."
- name: incident_severity
in: body
type: string
description: "The incident severity level (critical, high, medium, low)."
steps:
- name: get-alert
type: call
call: "monitoring.get-alert-policy"
with:
project_id: "{{project_id}}"
policy_name: "{{policy_name}}"
- name: query-related-logs
type: call
call: "cloud-logging.list-entries"
with:
project_id: "{{project_id}}"
filter_expression: "severity>=ERROR AND timestamp>=\"{{get-alert.last_triggered}}\""
- name: create-incident-record
type: call
call: "firestore.create-document"
with:
project_id: "{{project_id}}"
collection: "incidents"
severity: "{{incident_severity}}"
alert_policy: "{{policy_name}}"
triggered_at: "{{get-alert.last_triggered}}"
- name: notify-oncall
type: call
call: "pubsub.publish"
with:
project_id: "{{project_id}}"
topic_name: "oncall-alerts"
message_data: "Incident: {{policy_name}} | Severity: {{incident_severity}} | Record: {{create-incident-record.document_id}}"
consumes:
- type: http
namespace: monitoring
baseUri: "https://monitoring.googleapis.com/v3"
authentication:
type: bearer
token: "$secrets.gcp_access_token"
resources:
- name: alert-policies
path: "/projects/{{project_id}}/alertPolicies/{{policy_name}}"
inputParameters:
- name: project_id
in: path
- name: policy_name
in: path
operations:
- name: get-alert-policy
method: GET
- type: http
namespace: cloud-logging
baseUri: "https://logging.googleapis.com/v2"
authentication:
type: bearer
token: "$secrets.gcp_access_token"
resources:
- name: entries
path: "/entries:list"
operations:
- name: list-entries
method: POST
- type: http
namespace: firestore
baseUri: "https://firestore.googleapis.com/v1"
authentication:
type: bearer
token: "$secrets.gcp_access_token"
resources:
- name: documents
path: "/projects/{{project_id}}/databases/(default)/documents/{{collection}}"
inputParameters:
- name: project_id
in: path
- name: collection
in: path
operations:
- name: create-document
method: POST
- type: http
namespace: pubsub
baseUri: "https://pubsub.googleapis.com/v1"
authentication:
type: bearer
token: "$secrets.gcp_access_token"
resources:
- name: topics
path: "/projects/{{project_id}}/topics/{{topic_name}}:publish"
inputParameters:
- name: project_id
in: path
- name: topic_name
in: path
operations:
- name: publish
method: POST
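The query-related-logs step posts a filter to `entries:list`, scoped to the project and bounded by the alert's trigger time. A hedged sketch of that request body (the helper name, page size, and sample timestamp are assumptions for illustration):

```python
def build_log_query(project_id: str, min_timestamp: str) -> dict:
    """Body for POST https://logging.googleapis.com/v2/entries:list."""
    return {
        "resourceNames": [f"projects/{project_id}"],
        # Mirrors the capability's filter_expression value.
        "filter": f'severity>=ERROR AND timestamp>="{min_timestamp}"',
        "orderBy": "timestamp desc",
        "pageSize": 100,
    }

query = build_log_query("demo-project", "2024-01-01T00:00:00Z")
```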
Retrieves Jira issue status for Accord engineering teams.
naftiko: "0.5"
info:
label: "Jira Issue Status"
description: "Retrieves Jira issue status for Accord engineering teams."
tags:
- devops
- jira
- project-management
capability:
exposes:
- type: mcp
namespace: engineering
port: 8080
tools:
- name: get-issue
description: "Look up a Jira issue for Accord."
inputParameters:
- name: issue_key
in: body
type: string
description: "The Jira issue key to look up (for example, PROJ-123)."
call: "jira.get-issue"
with:
issue_key: "{{issue_key}}"
consumes:
- type: http
namespace: jira
baseUri: "https://accord.atlassian.net/rest/api/3"
authentication:
type: basic
username: "$secrets.jira_user"
password: "$secrets.jira_api_token"
resources:
- name: issues
path: "/issue/{{issue_key}}"
inputParameters:
- name: issue_key
in: path
operations:
- name: get-issue
method: GET
Deploys canary version, monitors error rates in Prometheus, compares against baseline in Grafana, promotes or rolls back, and notifies team.
naftiko: "0.5"
info:
label: "Kubernetes Canary Deployment Pipeline"
description: "Deploys canary version, monitors error rates in Prometheus, compares against baseline in Grafana, promotes or rolls back, and notifies team."
tags:
- devops
- kubernetes
- prometheus
- grafana
- slack
capability:
exposes:
- type: mcp
namespace: devops
port: 8080
tools:
- name: kubernetes_canary_deployment_pipeline
description: "Orchestrate the Kubernetes canary deployment pipeline workflow."
inputParameters:
- name: resource_id
in: body
type: string
description: "Primary resource identifier."
steps:
- name: get-kubernetes
type: call
call: "k8s.get-resource"
with:
resource_id: "{{resource_id}}"
- name: process-prometheus
type: call
call: "prometheus.process-resource"
with:
resource_id: "{{resource_id}}"
- name: create-grafana
type: call
call: "grafana.create-resource"
with:
resource_id: "{{resource_id}}"
- name: notify-slack
type: call
call: "slack.notify-resource"
with:
resource_id: "{{resource_id}}"
consumes:
- type: http
namespace: k8s
baseUri: "https://accord-k8s.com/api/v1"
authentication:
type: bearer
token: "$secrets.k8s_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: kubernetes-op
method: POST
- type: http
namespace: prometheus
baseUri: "https://accord-prometheus.com/api/v1"
authentication:
type: bearer
token: "$secrets.prometheus_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: prometheus-op
method: POST
- type: http
namespace: grafana
baseUri: "https://accord-grafana.com/api"
authentication:
type: bearer
token: "$secrets.grafana_api_key"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: grafana-op
method: POST
- type: http
namespace: slack
baseUri: "https://slack.com/api"
authentication:
type: bearer
token: "$secrets.slack_bot_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: slack-op
method: POST
Detects a failed Kubernetes CronJob, fetches the job logs from Cloud Logging, and publishes an alert event to Pub/Sub for incident response routing.
naftiko: "0.5"
info:
label: "Kubernetes CronJob Failure Alert Pipeline"
description: "Detects a failed Kubernetes CronJob, fetches the job logs from Cloud Logging, and publishes an alert event to Pub/Sub for incident response routing."
tags:
- automation
- kubernetes
- gcp
- google-cloud-platform
- event-driven
- observability
capability:
exposes:
- type: mcp
namespace: cronjob-alerts
port: 8080
tools:
- name: handle-cronjob-failure
description: "Given a Kubernetes namespace and CronJob name, fetch failure details, query logs, and publish an alert to Pub/Sub."
inputParameters:
- name: namespace
in: body
type: string
description: "The Kubernetes namespace."
- name: cronjob_name
in: body
type: string
description: "The Kubernetes CronJob name."
- name: project_id
in: body
type: string
description: "The GCP project for logging and Pub/Sub."
steps:
- name: get-cronjob
type: call
call: "k8s.get-cronjob"
with:
namespace: "{{namespace}}"
cronjob_name: "{{cronjob_name}}"
- name: fetch-logs
type: call
call: "cloud-logging.list-entries"
with:
project_id: "{{project_id}}"
filter_expression: "resource.type=\"k8s_container\" AND resource.labels.namespace_name=\"{{namespace}}\" AND resource.labels.container_name=\"{{cronjob_name}}\""
- name: publish-alert
type: call
call: "pubsub.publish"
with:
project_id: "{{project_id}}"
topic_name: "cronjob-alerts"
message_data: "CronJob failure: {{cronjob_name}} in {{namespace}}. Last status: {{get-cronjob.last_schedule_status}}"
consumes:
- type: http
namespace: k8s
baseUri: "https://kubernetes.default.svc"
authentication:
type: bearer
token: "$secrets.k8s_service_token"
resources:
- name: cronjobs
path: "/apis/batch/v1/namespaces/{{namespace}}/cronjobs/{{cronjob_name}}"
inputParameters:
- name: namespace
in: path
- name: cronjob_name
in: path
operations:
- name: get-cronjob
method: GET
- type: http
namespace: cloud-logging
baseUri: "https://logging.googleapis.com/v2"
authentication:
type: bearer
token: "$secrets.gcp_access_token"
resources:
- name: entries
path: "/entries:list"
operations:
- name: list-entries
method: POST
- type: http
namespace: pubsub
baseUri: "https://pubsub.googleapis.com/v1"
authentication:
type: bearer
token: "$secrets.gcp_access_token"
resources:
- name: topics
path: "/projects/{{project_id}}/topics/{{topic_name}}:publish"
inputParameters:
- name: project_id
in: path
- name: topic_name
in: path
operations:
- name: publish
method: POST
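The fetch-logs step's `filter_expression` targets `k8s_container` log entries by namespace and container name. A small helper that reproduces that filter string (the sample namespace and job name are illustrative):

```python
def build_cronjob_log_filter(namespace: str, cronjob_name: str) -> str:
    """Cloud Logging filter matching containers spawned by the CronJob."""
    return (f'resource.type="k8s_container"'
            f' AND resource.labels.namespace_name="{namespace}"'
            f' AND resource.labels.container_name="{cronjob_name}"')

log_filter = build_cronjob_log_filter("batch", "nightly-report")
```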
Updates a Kubernetes deployment image, waits for rollout readiness, and queries GCP Cloud Monitoring for error rate on the new revision.
naftiko: "0.5"
info:
label: "Kubernetes Deployment Rollout with Monitoring"
description: "Updates a Kubernetes deployment image, waits for rollout readiness, and queries GCP Cloud Monitoring for error rate on the new revision."
tags:
- automation
- kubernetes
- gcp
- google-cloud-platform
- observability
- deployment
capability:
exposes:
- type: mcp
namespace: k8s-deploy
port: 8080
tools:
- name: rollout-deployment
description: "Given a Kubernetes namespace, deployment name, and new image tag, update the deployment and verify health via Cloud Monitoring."
inputParameters:
- name: namespace
in: body
type: string
description: "The Kubernetes namespace."
- name: deployment_name
in: body
type: string
description: "The Kubernetes deployment name."
- name: image_tag
in: body
type: string
description: "The new Docker image tag to deploy."
- name: project_id
in: body
type: string
description: "The GCP project for monitoring queries."
steps:
- name: update-deployment
type: call
call: "k8s.patch-deployment"
with:
namespace: "{{namespace}}"
deployment_name: "{{deployment_name}}"
image_tag: "{{image_tag}}"
- name: check-error-rate
type: call
call: "monitoring.query-timeseries"
with:
project_id: "{{project_id}}"
filter: "resource.type=\"k8s_container\" AND resource.labels.namespace_name=\"{{namespace}}\" AND resource.labels.container_name=\"{{deployment_name}}\""
consumes:
- type: http
namespace: k8s
baseUri: "https://kubernetes.default.svc"
authentication:
type: bearer
token: "$secrets.k8s_service_token"
resources:
- name: deployments
path: "/apis/apps/v1/namespaces/{{namespace}}/deployments/{{deployment_name}}"
inputParameters:
- name: namespace
in: path
- name: deployment_name
in: path
operations:
- name: patch-deployment
method: PATCH
- type: http
namespace: monitoring
baseUri: "https://monitoring.googleapis.com/v3"
authentication:
type: bearer
token: "$secrets.gcp_access_token"
resources:
- name: timeseries
path: "/projects/{{project_id}}/timeSeries"
inputParameters:
- name: project_id
in: path
operations:
- name: query-timeseries
method: GET
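The check-error-rate step issues a GET against `timeSeries`, where the filter and the time interval travel as query parameters. A hypothetical sketch of assembling those parameters (the interval bounds are extra parameters this capability leaves implicit):

```python
def build_timeseries_params(namespace: str, container: str,
                            start_time: str, end_time: str) -> dict:
    """Query parameters for GET .../v3/projects/{id}/timeSeries."""
    return {
        "filter": (f'resource.type="k8s_container"'
                   f' AND resource.labels.namespace_name="{namespace}"'
                   f' AND resource.labels.container_name="{container}"'),
        "interval.startTime": start_time,
        "interval.endTime": end_time,
    }

params = build_timeseries_params("prod", "api",
                                 "2024-01-01T00:00:00Z", "2024-01-01T00:05:00Z")
```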
Retrieves the Horizontal Pod Autoscaler configuration for a Kubernetes deployment, returning min/max replicas, current replicas, and target CPU utilization.
naftiko: "0.5"
info:
label: "Kubernetes HPA Configuration Retrieval"
description: "Retrieves the Horizontal Pod Autoscaler configuration for a Kubernetes deployment, returning min/max replicas, current replicas, and target CPU utilization."
tags:
- containers
- kubernetes
- scaling
capability:
exposes:
- type: mcp
namespace: k8s-autoscaling
port: 8080
tools:
- name: get-hpa-config
description: "Given a Kubernetes namespace and HPA name, return the autoscaler configuration and current status."
inputParameters:
- name: namespace
in: body
type: string
description: "The Kubernetes namespace."
- name: hpa_name
in: body
type: string
description: "The HorizontalPodAutoscaler name."
call: "k8s.get-hpa"
with:
namespace: "{{namespace}}"
hpa_name: "{{hpa_name}}"
consumes:
- type: http
namespace: k8s
baseUri: "https://kubernetes.default.svc"
authentication:
type: bearer
token: "$secrets.k8s_service_token"
resources:
- name: hpa
path: "/apis/autoscaling/v2/namespaces/{{namespace}}/horizontalpodautoscalers/{{hpa_name}}"
inputParameters:
- name: namespace
in: path
- name: hpa_name
in: path
operations:
- name: get-hpa
method: GET
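An `autoscaling/v2` HPA object carries the reported fields under `spec` and `status`. A hedged sketch of extracting exactly what this tool returns (the sample object is illustrative, trimmed to the relevant fields):

```python
def summarize_hpa(hpa: dict) -> dict:
    """Pull min/max/current replicas and CPU target from an autoscaling/v2 HPA."""
    spec, status = hpa.get("spec", {}), hpa.get("status", {})
    target_cpu = None
    for metric in spec.get("metrics", []):
        resource = metric.get("resource", {})
        if resource.get("name") == "cpu":
            target_cpu = resource.get("target", {}).get("averageUtilization")
    return {
        "min_replicas": spec.get("minReplicas"),
        "max_replicas": spec.get("maxReplicas"),
        "current_replicas": status.get("currentReplicas"),
        "target_cpu_utilization": target_cpu,
    }

sample = {
    "spec": {
        "minReplicas": 2,
        "maxReplicas": 10,
        "metrics": [{"type": "Resource",
                     "resource": {"name": "cpu",
                                  "target": {"type": "Utilization",
                                             "averageUtilization": 70}}}],
    },
    "status": {"currentReplicas": 4},
}
```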
Retrieves resource quota usage for a Kubernetes namespace and publishes an alert to Pub/Sub if usage exceeds 80% of any limit.
naftiko: "0.5"
info:
label: "Kubernetes Namespace Resource Quotas with Alert"
description: "Retrieves resource quota usage for a Kubernetes namespace and publishes an alert to Pub/Sub if usage exceeds 80% of any limit."
tags:
- containers
- kubernetes
- observability
- resource-management
- event-driven
capability:
exposes:
- type: mcp
namespace: k8s-resources
port: 8080
tools:
- name: get-resource-quotas
description: "Given a Kubernetes namespace and GCP project, check resource quotas and alert if near limits."
inputParameters:
- name: namespace
in: body
type: string
description: "The Kubernetes namespace to inspect."
- name: project_id
in: body
type: string
description: "The GCP project for Pub/Sub alerts."
steps:
- name: fetch-quotas
type: call
call: "k8s.get-resource-quotas"
with:
namespace: "{{namespace}}"
- name: alert-capacity
type: call
call: "pubsub.publish"
with:
project_id: "{{project_id}}"
topic_name: "resource-quota-alerts"
message_data: "Namespace: {{namespace}}, CPU used: {{fetch-quotas.cpu_used}}/{{fetch-quotas.cpu_limit}}, Memory used: {{fetch-quotas.memory_used}}/{{fetch-quotas.memory_limit}}"
consumes:
- type: http
namespace: k8s
baseUri: "https://kubernetes.default.svc"
authentication:
type: bearer
token: "$secrets.k8s_service_token"
resources:
- name: resource-quotas
path: "/api/v1/namespaces/{{namespace}}/resourcequotas"
inputParameters:
- name: namespace
in: path
operations:
- name: get-resource-quotas
method: GET
- type: http
namespace: pubsub
baseUri: "https://pubsub.googleapis.com/v1"
authentication:
type: bearer
token: "$secrets.gcp_access_token"
resources:
- name: topics
path: "/projects/{{project_id}}/topics/{{topic_name}}:publish"
inputParameters:
- name: project_id
in: path
- name: topic_name
in: path
operations:
- name: publish
method: POST
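The steps above run sequentially, so the 80% threshold from the description has to be evaluated between fetch-quotas and alert-capacity. A sketch of that check against a core/v1 ResourceQuota `status` block, assuming a deliberately simplified quantity parser (plain numbers, `m`, and binary suffixes only — real Kubernetes quantities allow more forms):

```python
def parse_quantity(value: str) -> float:
    """Parse a Kubernetes resource quantity such as '500m', '4', or '8Gi'.
    Simplified: does not cover decimal SI suffixes like 'k' or 'M'."""
    suffixes = {"m": 1e-3, "Ki": 2**10, "Mi": 2**20, "Gi": 2**30, "Ti": 2**40}
    for suffix, factor in suffixes.items():
        if value.endswith(suffix):
            return float(value[: -len(suffix)]) * factor
    return float(value)

def quotas_over_threshold(status: dict, threshold: float = 0.8) -> list:
    """Return the resource names whose used/hard ratio exceeds the threshold."""
    used, hard = status.get("used", {}), status.get("hard", {})
    over = []
    for name, limit in hard.items():
        if name in used and parse_quantity(used[name]) / parse_quantity(limit) > threshold:
            over.append(name)
    return over
```

Only when `quotas_over_threshold` returns a non-empty list would the workflow need to invoke `pubsub.publish`.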
Detects pod evictions in Kubernetes, identifies root cause via Prometheus, reschedules workloads, logs in ServiceNow, and notifies ops.
naftiko: "0.5"
info:
label: "Kubernetes Pod Disruption Handler"
description: "Detects pod evictions in Kubernetes, identifies root cause via Prometheus, reschedules workloads, logs in ServiceNow, and notifies ops."
tags:
- devops
- kubernetes
- prometheus
- servicenow
- slack
capability:
exposes:
- type: mcp
namespace: devops
port: 8080
tools:
- name: kubernetes_pod_disruption_handler
description: "Orchestrate the Kubernetes pod disruption handler workflow."
inputParameters:
- name: resource_id
in: body
type: string
description: "Primary resource identifier."
steps:
- name: get-kubernetes
type: call
call: "k8s.get-resource"
with:
resource_id: "{{resource_id}}"
- name: process-prometheus
type: call
call: "prometheus.process-resource"
with:
resource_id: "{{resource_id}}"
- name: create-servicenow
type: call
call: "servicenow.create-resource"
with:
resource_id: "{{resource_id}}"
- name: notify-slack
type: call
call: "slack.notify-resource"
with:
resource_id: "{{resource_id}}"
consumes:
- type: http
namespace: k8s
baseUri: "https://accord-k8s.com/api/v1"
authentication:
type: bearer
token: "$secrets.k8s_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: kubernetes-op
method: POST
- type: http
namespace: prometheus
baseUri: "https://accord-prometheus.com/api/v1"
authentication:
type: bearer
token: "$secrets.prometheus_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: prometheus-op
method: POST
- type: http
namespace: servicenow
baseUri: "https://accord.service-now.com/api/now"
authentication:
type: basic
username: "$secrets.servicenow_user"
password: "$secrets.servicenow_password"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: servicenow-op
method: POST
- type: http
namespace: slack
baseUri: "https://slack.com/api"
authentication:
type: bearer
token: "$secrets.slack_bot_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: slack-op
method: POST
Fetches the status of a Kubernetes pod and retrieves the last 100 lines of container logs for quick diagnostics.
naftiko: "0.5"
info:
label: "Kubernetes Pod Status with Log Snapshot"
description: "Fetches the status of a Kubernetes pod and retrieves the last 100 lines of container logs for quick diagnostics."
tags:
- containers
- kubernetes
- observability
capability:
exposes:
- type: mcp
namespace: k8s-ops
port: 8080
tools:
- name: get-pod-status
description: "Given a Kubernetes namespace and pod name, return the pod phase, restart count, container statuses, and recent log lines."
inputParameters:
- name: namespace
in: body
type: string
description: "The Kubernetes namespace."
- name: pod_name
in: body
type: string
description: "The name of the pod to inspect."
steps:
- name: fetch-pod
type: call
call: "k8s.get-pod"
with:
namespace: "{{namespace}}"
pod_name: "{{pod_name}}"
- name: fetch-logs
type: call
call: "k8s.get-pod-logs"
with:
namespace: "{{namespace}}"
pod_name: "{{pod_name}}"
consumes:
- type: http
namespace: k8s
baseUri: "https://kubernetes.default.svc"
authentication:
type: bearer
token: "$secrets.k8s_service_token"
resources:
- name: pods
path: "/api/v1/namespaces/{{namespace}}/pods/{{pod_name}}"
inputParameters:
- name: namespace
in: path
- name: pod_name
in: path
operations:
- name: get-pod
method: GET
- name: pod-logs
path: "/api/v1/namespaces/{{namespace}}/pods/{{pod_name}}/log?tailLines=100"
inputParameters:
- name: namespace
in: path
- name: pod_name
in: path
operations:
- name: get-pod-logs
method: GET
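The fetch-pod and fetch-logs steps above hit two API-server paths, the second carrying the `tailLines=100` query parameter baked into the resource definition. A small sketch of building those paths and summarizing the pod fields the tool description names (helper names are illustrative):

```python
def pod_request_paths(namespace: str, pod_name: str, tail_lines: int = 100):
    """Build the two API-server paths used by fetch-pod and fetch-logs."""
    base = f"/api/v1/namespaces/{namespace}/pods/{pod_name}"
    return base, f"{base}/log?tailLines={tail_lines}"

def pod_summary(pod: dict) -> dict:
    """Pull the phase and total restart count out of a core/v1 Pod object."""
    status = pod.get("status", {})
    statuses = status.get("containerStatuses", [])
    return {
        "phase": status.get("phase"),
        "restart_count": sum(c.get("restartCount", 0) for c in statuses),
    }
```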
Audits RBAC policies in Kubernetes clusters, identifies excessive permissions, creates remediation tasks in Jira, and notifies security team.
naftiko: "0.5"
info:
label: "Kubernetes RBAC Audit Pipeline"
description: "Audits RBAC policies in Kubernetes clusters, identifies excessive permissions, creates remediation tasks in Jira, and notifies security team."
tags:
- security
- kubernetes
- jira
- slack
capability:
exposes:
- type: mcp
namespace: security
port: 8080
tools:
- name: kubernetes_rbac_audit_pipeline
description: "Orchestrate the Kubernetes RBAC audit pipeline workflow."
inputParameters:
- name: resource_id
in: body
type: string
description: "Primary resource identifier."
steps:
- name: get-kubernetes
type: call
call: "k8s.get-resource"
with:
resource_id: "{{resource_id}}"
- name: process-jira
type: call
call: "jira.process-resource"
with:
resource_id: "{{resource_id}}"
- name: create-slack
type: call
call: "slack.create-resource"
with:
resource_id: "{{resource_id}}"
- name: notify-servicenow
type: call
call: "servicenow.notify-resource"
with:
resource_id: "{{resource_id}}"
consumes:
- type: http
namespace: k8s
baseUri: "https://accord-k8s.com/api/v1"
authentication:
type: bearer
token: "$secrets.k8s_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: kubernetes-op
method: POST
- type: http
namespace: jira
baseUri: "https://accord.atlassian.net/rest/api/3"
authentication:
type: basic
username: "$secrets.jira_user"
password: "$secrets.jira_api_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: jira-op
method: POST
- type: http
namespace: slack
baseUri: "https://slack.com/api"
authentication:
type: bearer
token: "$secrets.slack_bot_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: slack-op
method: POST
- type: http
namespace: servicenow
baseUri: "https://accord.service-now.com/api/now"
authentication:
type: basic
username: "$secrets.servicenow_user"
password: "$secrets.servicenow_password"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: servicenow-op
method: POST
Updates a Kubernetes VirtualService traffic split for canary deployments, shifting a percentage of traffic to the new revision and logging the change.
naftiko: "0.5"
info:
label: "Kubernetes Service Mesh Traffic Routing"
description: "Updates a Kubernetes VirtualService traffic split for canary deployments, shifting a percentage of traffic to the new revision and logging the change."
tags:
- automation
- kubernetes
- gcp
- google-cloud-platform
- deployment
capability:
exposes:
- type: mcp
namespace: traffic-mgmt
port: 8080
tools:
- name: update-traffic-split
description: "Given a Kubernetes namespace, VirtualService name, and traffic percentage, update the canary traffic split and log the change."
inputParameters:
- name: namespace
in: body
type: string
description: "The Kubernetes namespace."
- name: virtual_service_name
in: body
type: string
description: "The Istio VirtualService name."
- name: canary_weight
in: body
type: integer
description: "The percentage of traffic to route to the canary revision."
- name: project_id
in: body
type: string
description: "The GCP project for logging."
steps:
- name: update-route
type: call
call: "k8s.patch-virtual-service"
with:
namespace: "{{namespace}}"
virtual_service_name: "{{virtual_service_name}}"
canary_weight: "{{canary_weight}}"
- name: log-traffic-change
type: call
call: "cloud-logging.write-entry"
with:
project_id: "{{project_id}}"
log_name: "traffic-routing"
message: "Updated {{virtual_service_name}} canary weight to {{canary_weight}}% in {{namespace}}."
consumes:
- type: http
namespace: k8s
baseUri: "https://kubernetes.default.svc"
authentication:
type: bearer
token: "$secrets.k8s_service_token"
resources:
- name: virtual-services
path: "/apis/networking.istio.io/v1beta1/namespaces/{{namespace}}/virtualservices/{{virtual_service_name}}"
inputParameters:
- name: namespace
in: path
- name: virtual_service_name
in: path
operations:
- name: patch-virtual-service
method: PATCH
- type: http
namespace: cloud-logging
baseUri: "https://logging.googleapis.com/v2"
authentication:
type: bearer
token: "$secrets.gcp_access_token"
resources:
- name: entries
path: "/entries:write"
operations:
- name: write-entry
method: POST
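The update-route step PATCHes the Istio VirtualService with the new canary weight. A sketch of the JSON merge-patch body it would send, assuming `stable` and `canary` destination subsets (the subset names are assumptions, not mandated by the capability); the request would need `Content-Type: application/merge-patch+json`:

```python
def canary_route_patch(host: str, canary_weight: int) -> dict:
    """JSON merge-patch body splitting traffic between assumed 'stable'
    and 'canary' subsets; weights must sum to 100."""
    if not 0 <= canary_weight <= 100:
        raise ValueError("canary_weight must be between 0 and 100")
    return {
        "spec": {
            "http": [{
                "route": [
                    {"destination": {"host": host, "subset": "stable"},
                     "weight": 100 - canary_weight},
                    {"destination": {"host": host, "subset": "canary"},
                     "weight": canary_weight},
                ]
            }]
        }
    }
```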
Fetches performance analytics for a LinkedIn advertising campaign including impressions, clicks, and spend.
naftiko: "0.5"
info:
label: "LinkedIn Ad Campaign Performance Retrieval"
description: "Fetches performance analytics for a LinkedIn advertising campaign including impressions, clicks, and spend."
tags:
- marketing
- linkedin
- analytics
capability:
exposes:
- type: mcp
namespace: ad-analytics
port: 8080
tools:
- name: get-campaign-performance
description: "Given a LinkedIn campaign ID and date range, return impressions, clicks, CTR, and total spend."
inputParameters:
- name: campaign_id
in: body
type: string
description: "The LinkedIn advertising campaign identifier."
- name: start_date
in: body
type: string
description: "The start date in YYYY-MM-DD format."
- name: end_date
in: body
type: string
description: "The end date in YYYY-MM-DD format."
call: "linkedin.get-campaign-analytics"
with:
campaign_id: "{{campaign_id}}"
start_date: "{{start_date}}"
end_date: "{{end_date}}"
consumes:
- type: http
namespace: linkedin
baseUri: "https://api.linkedin.com/v2"
authentication:
type: bearer
token: "$secrets.linkedin_access_token"
resources:
- name: campaign-analytics
path: "/adAnalyticsV2?q=analytics&campaigns=urn:li:sponsoredCampaign:{{campaign_id}}&dateRange.start.day=1&dateRange.start.month=1&dateRange.start.year=2026"
inputParameters:
- name: campaign_id
in: query
operations:
- name: get-campaign-analytics
method: GET
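The campaign-analytics resource path above pins `dateRange.start` to 2026-01-01 even though the tool accepts `start_date` and `end_date`. A hedged sketch of expanding the YYYY-MM-DD inputs into the `dateRange.*` query-parameter shape shown in that path, so the supplied dates actually reach the API:

```python
from urllib.parse import urlencode

def analytics_query(campaign_id: str, start_date: str, end_date: str) -> str:
    """Build the adAnalyticsV2 query string, expanding YYYY-MM-DD dates
    into the dateRange.start/end day-month-year parameters."""
    sy, sm, sd = (int(p) for p in start_date.split("-"))
    ey, em, ed = (int(p) for p in end_date.split("-"))
    params = {
        "q": "analytics",
        "campaigns": f"urn:li:sponsoredCampaign:{campaign_id}",
        "dateRange.start.day": sd, "dateRange.start.month": sm, "dateRange.start.year": sy,
        "dateRange.end.day": ed, "dateRange.end.month": em, "dateRange.end.year": ey,
    }
    # keep the colons in the sponsoredCampaign URN unescaped
    return "/adAnalyticsV2?" + urlencode(params, safe=":")
```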
Retrieves a LinkedIn company profile by vanity name and caches the result in Firestore for internal dashboards and competitive intelligence.
naftiko: "0.5"
info:
label: "LinkedIn Company Profile with Firestore Cache"
description: "Retrieves a LinkedIn company profile by vanity name and caches the result in Firestore for internal dashboards and competitive intelligence."
tags:
- partnerships
- linkedin
- marketing
- gcp
- google-cloud-platform
- firestore
capability:
exposes:
- type: mcp
namespace: social-intel
port: 8080
tools:
- name: get-company-profile
description: "Given a LinkedIn company vanity name and GCP project, fetch the profile and cache it in Firestore."
inputParameters:
- name: vanity_name
in: body
type: string
description: "The LinkedIn company vanity name (URL slug)."
- name: project_id
in: body
type: string
description: "The GCP project for Firestore."
steps:
- name: fetch-profile
type: call
call: "linkedin.get-organization"
with:
vanity_name: "{{vanity_name}}"
- name: cache-profile
type: call
call: "firestore.create-document"
with:
project_id: "{{project_id}}"
collection: "linkedin-company-profiles"
vanity_name: "{{vanity_name}}"
follower_count: "{{fetch-profile.follower_count}}"
industry: "{{fetch-profile.industry}}"
consumes:
- type: http
namespace: linkedin
baseUri: "https://api.linkedin.com/v2"
authentication:
type: bearer
token: "$secrets.linkedin_access_token"
resources:
- name: organizations
path: "/organizations?q=vanityName&vanityName={{vanity_name}}"
inputParameters:
- name: vanity_name
in: query
operations:
- name: get-organization
method: GET
- type: http
namespace: firestore
baseUri: "https://firestore.googleapis.com/v1"
authentication:
type: bearer
token: "$secrets.gcp_access_token"
resources:
- name: documents
path: "/projects/{{project_id}}/databases/(default)/documents/{{collection}}"
inputParameters:
- name: project_id
in: path
- name: collection
in: path
operations:
- name: create-document
method: POST
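The cache-profile step above passes flat key/value pairs, but the Firestore REST API expects each value wrapped in a typed envelope inside the document's `fields` map (integers are sent as strings). A minimal sketch of that conversion for the handful of types this capability uses:

```python
def to_firestore_fields(doc: dict) -> dict:
    """Wrap plain Python values in the typed-value envelopes the Firestore
    REST API requires in a Document's 'fields' map."""
    fields = {}
    for key, value in doc.items():
        if isinstance(value, bool):  # check bool before int: bool subclasses int
            fields[key] = {"booleanValue": value}
        elif isinstance(value, int):
            fields[key] = {"integerValue": str(value)}  # REST encodes ints as strings
        elif isinstance(value, float):
            fields[key] = {"doubleValue": value}
        else:
            fields[key] = {"stringValue": str(value)}
    return fields
```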
Fetches LinkedIn page engagement metrics, transforms the data, and loads it into a BigQuery table for long-term analytics and dashboarding.
naftiko: "0.5"
info:
label: "LinkedIn Engagement Analytics to BigQuery"
description: "Fetches LinkedIn page engagement metrics, transforms the data, and loads it into a BigQuery table for long-term analytics and dashboarding."
tags:
- marketing
- linkedin
- gcp
- google-cloud-platform
- bigquery
- analytics
capability:
exposes:
- type: mcp
namespace: social-analytics
port: 8080
tools:
- name: sync-linkedin-engagement
description: "Given a LinkedIn organization ID and GCP project, fetch engagement metrics and load into BigQuery."
inputParameters:
- name: organization_id
in: body
type: string
description: "The LinkedIn organization identifier."
- name: project_id
in: body
type: string
description: "The GCP project for BigQuery."
- name: dataset
in: body
type: string
description: "The BigQuery dataset name."
- name: table
in: body
type: string
description: "The BigQuery table name."
steps:
- name: fetch-engagement
type: call
call: "linkedin.get-page-statistics"
with:
organization_id: "{{organization_id}}"
- name: load-to-bigquery
type: call
call: "bigquery.insert-rows"
with:
project_id: "{{project_id}}"
dataset: "{{dataset}}"
table: "{{table}}"
consumes:
- type: http
namespace: linkedin
baseUri: "https://api.linkedin.com/v2"
authentication:
type: bearer
token: "$secrets.linkedin_access_token"
resources:
- name: page-statistics
path: "/organizationalEntityShareStatistics?q=organizationalEntity&organizationalEntity=urn:li:organization:{{organization_id}}"
inputParameters:
- name: organization_id
in: query
operations:
- name: get-page-statistics
method: GET
- type: http
namespace: bigquery
baseUri: "https://bigquery.googleapis.com/bigquery/v2"
authentication:
type: bearer
token: "$secrets.gcp_access_token"
resources:
- name: tabledata
path: "/projects/{{project_id}}/datasets/{{dataset}}/tables/{{table}}/insertAll"
inputParameters:
- name: project_id
in: path
- name: dataset
in: path
- name: table
in: path
operations:
- name: insert-rows
method: POST
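The load-to-bigquery step targets the `insertAll` endpoint, which takes a specific request body: a `rows` array where each entry carries the record under `json` plus an optional `insertId` for best-effort de-duplication on retries. A sketch of assembling that body:

```python
import uuid

def insert_all_body(rows: list) -> dict:
    """Request body for BigQuery tabledata.insertAll; insertId lets the
    service de-duplicate rows if the request is retried."""
    return {
        "kind": "bigquery#tableDataInsertAllRequest",
        "rows": [{"insertId": str(uuid.uuid4()), "json": row} for row in rows],
    }
```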
Fetches a LinkedIn job posting and its applicant list, then stores a summary in Firestore for the hiring dashboard.
naftiko: "0.5"
info:
label: "LinkedIn Job Posting with Applicant Summary"
description: "Fetches a LinkedIn job posting and its applicant list, then stores a summary in Firestore for the hiring dashboard."
tags:
- talent
- linkedin
- recruiting
- gcp
- google-cloud-platform
- firestore
capability:
exposes:
- type: mcp
namespace: talent-acquisition
port: 8080
tools:
- name: get-job-posting
description: "Given a LinkedIn job posting ID and GCP project, fetch posting details and applicants, then store a summary in Firestore."
inputParameters:
- name: job_posting_id
in: body
type: string
description: "The LinkedIn job posting identifier."
- name: project_id
in: body
type: string
description: "The GCP project for Firestore."
steps:
- name: fetch-posting
type: call
call: "linkedin.get-job-posting"
with:
job_posting_id: "{{job_posting_id}}"
- name: store-summary
type: call
call: "firestore.create-document"
with:
project_id: "{{project_id}}"
collection: "job-posting-summaries"
posting_id: "{{job_posting_id}}"
title: "{{fetch-posting.title}}"
applicant_count: "{{fetch-posting.applicant_count}}"
consumes:
- type: http
namespace: linkedin
baseUri: "https://api.linkedin.com/v2"
authentication:
type: bearer
token: "$secrets.linkedin_access_token"
resources:
- name: job-postings
path: "/jobPostings/{{job_posting_id}}"
inputParameters:
- name: job_posting_id
in: path
operations:
- name: get-job-posting
method: GET
- type: http
namespace: firestore
baseUri: "https://firestore.googleapis.com/v1"
authentication:
type: bearer
token: "$secrets.gcp_access_token"
resources:
- name: documents
path: "/projects/{{project_id}}/databases/(default)/documents/{{collection}}"
inputParameters:
- name: project_id
in: path
- name: collection
in: path
operations:
- name: create-document
method: POST
Fetches applicant data from a LinkedIn job posting, compiles it into a structured report, and uploads the export to Google Cloud Storage for hiring team review.
naftiko: "0.5"
info:
label: "LinkedIn Talent Pipeline to GCS Export"
description: "Fetches applicant data from a LinkedIn job posting, compiles it into a structured report, and uploads the export to Google Cloud Storage for hiring team review."
tags:
- talent
- linkedin
- gcp
- google-cloud-platform
- recruiting
- automation
capability:
exposes:
- type: mcp
namespace: talent-export
port: 8080
tools:
- name: export-applicants
description: "Given a LinkedIn job posting ID and GCS bucket, fetch applicants and upload the compiled report."
inputParameters:
- name: job_posting_id
in: body
type: string
description: "The LinkedIn job posting identifier."
- name: bucket_name
in: body
type: string
description: "The GCS bucket for the applicant export."
steps:
- name: get-applicants
type: call
call: "linkedin.list-applicants"
with:
job_posting_id: "{{job_posting_id}}"
- name: upload-report
type: call
call: "gcs.upload-object"
with:
bucket_name: "{{bucket_name}}"
file_name: "applicants-{{job_posting_id}}.json"
consumes:
- type: http
namespace: linkedin
baseUri: "https://api.linkedin.com/v2"
authentication:
type: bearer
token: "$secrets.linkedin_access_token"
resources:
- name: applicants
path: "/jobPostings/{{job_posting_id}}/applicants"
inputParameters:
- name: job_posting_id
in: path
operations:
- name: list-applicants
method: GET
- type: http
namespace: gcs
baseUri: "https://storage.googleapis.com/upload/storage/v1"
authentication:
type: bearer
token: "$secrets.gcp_access_token"
resources:
- name: objects
path: "/b/{{bucket_name}}/o?uploadType=media&name={{file_name}}"
inputParameters:
- name: bucket_name
in: path
- name: file_name
in: query
operations:
- name: upload-object
method: POST
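The upload-report step interpolates `file_name` into the `name=` query parameter of the simple-media upload URL. Object names containing reserved characters (slashes, spaces) must be URL-encoded there; a small sketch:

```python
from urllib.parse import quote

def media_upload_url(bucket_name: str, file_name: str) -> str:
    """GCS simple-media upload URL; the object name is percent-encoded
    so names like 'exports/applicants-123.json' survive the query string."""
    return (
        "https://storage.googleapis.com/upload/storage/v1"
        f"/b/{bucket_name}/o?uploadType=media&name={quote(file_name, safe='')}"
    )
```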
When a lead is captured, enriches the lead profile from LinkedIn and publishes an event to GCP Pub/Sub for downstream marketing automation workflows.
naftiko: "0.5"
info:
label: "Marketing Automation Lead Capture to LinkedIn"
description: "When a lead is captured, enriches the lead profile from LinkedIn and publishes an event to GCP Pub/Sub for downstream marketing automation workflows."
tags:
- marketing-automation
- linkedin
- gcp
- google-cloud-platform
- event-driven
capability:
exposes:
- type: mcp
namespace: marketing-leads
port: 8080
tools:
- name: capture-and-enrich-lead
description: "Given a lead email and LinkedIn profile URL, enrich the lead data from LinkedIn and publish to a Pub/Sub topic for marketing automation."
inputParameters:
- name: lead_email
in: body
type: string
description: "The email address of the captured lead."
- name: linkedin_profile_id
in: body
type: string
description: "The LinkedIn profile identifier for enrichment."
- name: project_id
in: body
type: string
description: "The GCP project for Pub/Sub."
steps:
- name: enrich-profile
type: call
call: "linkedin.get-profile"
with:
profile_id: "{{linkedin_profile_id}}"
- name: publish-lead-event
type: call
call: "pubsub.publish"
with:
project_id: "{{project_id}}"
topic_name: "marketing-leads"
message_data: "Lead: {{lead_email}}, Name: {{enrich-profile.first_name}} {{enrich-profile.last_name}}, Company: {{enrich-profile.company_name}}, Title: {{enrich-profile.headline}}"
consumes:
- type: http
namespace: linkedin
baseUri: "https://api.linkedin.com/v2"
authentication:
type: bearer
token: "$secrets.linkedin_access_token"
resources:
- name: profiles
path: "/people/{{profile_id}}"
inputParameters:
- name: profile_id
in: path
operations:
- name: get-profile
method: GET
- type: http
namespace: pubsub
baseUri: "https://pubsub.googleapis.com/v1"
authentication:
type: bearer
token: "$secrets.gcp_access_token"
resources:
- name: topics
path: "/projects/{{project_id}}/topics/{{topic_name}}:publish"
inputParameters:
- name: project_id
in: path
- name: topic_name
in: path
operations:
- name: publish
method: POST
Publishes a marketing campaign event to GCP Pub/Sub with campaign metadata, triggering downstream automation workflows for email, social, and analytics processing.
naftiko: "0.5"
info:
label: "Marketing Campaign Event Publisher"
description: "Publishes a marketing campaign event to GCP Pub/Sub with campaign metadata, triggering downstream automation workflows for email, social, and analytics processing."
tags:
- marketing-automation
- event-driven
- gcp
- google-cloud-platform
- automation
capability:
exposes:
- type: mcp
namespace: campaign-events
port: 8080
tools:
- name: publish-campaign-event
description: "Given a GCP project and campaign details, publish a campaign event to Pub/Sub for downstream processing."
inputParameters:
- name: project_id
in: body
type: string
description: "The GCP project identifier."
- name: campaign_name
in: body
type: string
description: "The marketing campaign name."
- name: campaign_type
in: body
type: string
description: "The campaign type (email, social, display)."
- name: audience_segment
in: body
type: string
description: "The target audience segment identifier."
call: "pubsub.publish"
with:
project_id: "{{project_id}}"
topic_name: "campaign-events"
message_data: "Campaign: {{campaign_name}}, Type: {{campaign_type}}, Segment: {{audience_segment}}"
consumes:
- type: http
namespace: pubsub
baseUri: "https://pubsub.googleapis.com/v1"
authentication:
type: bearer
token: "$secrets.gcp_access_token"
resources:
- name: topics
path: "/projects/{{project_id}}/topics/{{topic_name}}:publish"
inputParameters:
- name: project_id
in: path
- name: topic_name
in: path
operations:
- name: publish
method: POST
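The `message_data` templates above are plain strings, but the Pub/Sub REST publish endpoint requires the `data` field of each message to be base64-encoded. A minimal sketch of building the publish request body (the `attributes` parameter is an optional extra, not used by the capability):

```python
import base64
import json

def publish_body(message: str, attributes=None) -> str:
    """Pub/Sub topics.publish request body; 'data' must be base64-encoded."""
    msg = {"data": base64.b64encode(message.encode("utf-8")).decode("ascii")}
    if attributes:
        msg["attributes"] = attributes
    return json.dumps({"messages": [msg]})
```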
Maps service dependencies from Datadog APM, identifies circular dependencies, creates Jira tickets, documents in Confluence, and notifies architects.
naftiko: "0.5"
info:
label: "Microservice Dependency Analyzer"
description: "Maps service dependencies from Datadog APM, identifies circular dependencies, creates Jira tickets, documents in Confluence, and notifies architects."
tags:
- architecture
- datadog
- jira
- confluence
- slack
capability:
exposes:
- type: mcp
namespace: architecture
port: 8080
tools:
- name: microservice_dependency_analyzer
description: "Orchestrate the microservice dependency analyzer workflow."
inputParameters:
- name: resource_id
in: body
type: string
description: "Primary resource identifier."
steps:
- name: get-datadog
type: call
call: "datadog.get-resource"
with:
resource_id: "{{resource_id}}"
- name: process-jira
type: call
call: "jira.process-resource"
with:
resource_id: "{{resource_id}}"
- name: create-confluence
type: call
call: "confluence.create-resource"
with:
resource_id: "{{resource_id}}"
- name: notify-slack
type: call
call: "slack.notify-resource"
with:
resource_id: "{{resource_id}}"
consumes:
- type: http
namespace: datadog
baseUri: "https://api.datadoghq.com/api/v1"
authentication:
type: apiKey
key: "$secrets.datadog_api_key"
header: "DD-API-KEY"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: datadog-op
method: POST
- type: http
namespace: jira
baseUri: "https://accord.atlassian.net/rest/api/3"
authentication:
type: basic
username: "$secrets.jira_user"
password: "$secrets.jira_api_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: jira-op
method: POST
- type: http
namespace: confluence
baseUri: "https://accord.atlassian.net/wiki/rest/api"
authentication:
type: basic
username: "$secrets.confluence_user"
password: "$secrets.confluence_api_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: confluence-op
method: POST
- type: http
namespace: slack
baseUri: "https://slack.com/api"
authentication:
type: bearer
token: "$secrets.slack_bot_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: slack-op
method: POST
Initiates failover to secondary GCP region, validates services via health checks, monitors in Prometheus, logs results, and notifies SRE team.
naftiko: "0.5"
info:
label: "Multi-Region Failover Test Pipeline"
description: "Initiates failover to secondary GCP region, validates services via health checks, monitors in Prometheus, logs results, and notifies SRE team."
tags:
- disaster-recovery
- gcp
- prometheus
- servicenow
- slack
capability:
exposes:
- type: mcp
namespace: disaster-recovery
port: 8080
tools:
- name: multi_region_failover_test_pipeline
description: "Orchestrate the multi-region failover test pipeline workflow."
inputParameters:
- name: resource_id
in: body
type: string
description: "Primary resource identifier."
steps:
- name: get-gcp
type: call
call: "gcp.get-resource"
with:
resource_id: "{{resource_id}}"
- name: process-prometheus
type: call
call: "prometheus.process-resource"
with:
resource_id: "{{resource_id}}"
- name: create-servicenow
type: call
call: "servicenow.create-resource"
with:
resource_id: "{{resource_id}}"
- name: notify-slack
type: call
call: "slack.notify-resource"
with:
resource_id: "{{resource_id}}"
consumes:
- type: http
namespace: gcp
baseUri: "https://compute.googleapis.com/compute/v1/projects/accord"
authentication:
type: bearer
token: "$secrets.gcp_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: gcp-op
method: POST
- type: http
namespace: prometheus
baseUri: "https://accord-prometheus.com/api/v1"
authentication:
type: bearer
token: "$secrets.prometheus_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: prometheus-op
method: POST
- type: http
namespace: servicenow
baseUri: "https://accord.service-now.com/api/now"
authentication:
type: basic
username: "$secrets.servicenow_user"
password: "$secrets.servicenow_password"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: servicenow-op
method: POST
- type: http
namespace: slack
baseUri: "https://slack.com/api"
authentication:
type: bearer
token: "$secrets.slack_bot_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: slack-op
method: POST
Checks TLS certificate expiry on Kubernetes nodes, generates new certs, deploys via rolling update, validates in Prometheus, and notifies ops.
naftiko: "0.5"
info:
label: "Node Certificate Renewal Pipeline"
description: "Checks TLS certificate expiry on Kubernetes nodes, generates new certs, deploys via rolling update, validates in Prometheus, and notifies ops."
tags:
- security
- kubernetes
- prometheus
- servicenow
- slack
capability:
exposes:
- type: mcp
namespace: security
port: 8080
tools:
- name: node_certificate_renewal_pipeline
description: "Orchestrate the node certificate renewal pipeline workflow."
inputParameters:
- name: resource_id
in: body
type: string
description: "Primary resource identifier."
steps:
- name: get-kubernetes
type: call
call: "k8s.get-resource"
with:
resource_id: "{{resource_id}}"
- name: process-prometheus
type: call
call: "prometheus.process-resource"
with:
resource_id: "{{resource_id}}"
- name: create-servicenow
type: call
call: "servicenow.create-resource"
with:
resource_id: "{{resource_id}}"
- name: notify-slack
type: call
call: "slack.notify-resource"
with:
resource_id: "{{resource_id}}"
consumes:
- type: http
namespace: k8s
baseUri: "https://accord-k8s.com/api/v1"
authentication:
type: bearer
token: "$secrets.k8s_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: kubernetes-op
method: POST
- type: http
namespace: prometheus
baseUri: "https://accord-prometheus.com/api/v1"
authentication:
type: bearer
token: "$secrets.prometheus_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: prometheus-op
method: POST
- type: http
namespace: servicenow
baseUri: "https://accord.service-now.com/api/now"
authentication:
type: basic
username: "$secrets.servicenow_user"
password: "$secrets.servicenow_password"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: servicenow-op
method: POST
- type: http
namespace: slack
baseUri: "https://slack.com/api"
authentication:
type: bearer
token: "$secrets.slack_bot_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: slack-op
method: POST
Queries a Node.js health endpoint, collects response time and status, and writes the results to a Firestore document used by the internal health dashboard.
naftiko: "0.5"
info:
label: "Node.js Service Health Dashboard Update"
description: "Queries a Node.js health endpoint, collects response time and status, and writes the results to a Firestore document used by the internal health dashboard."
tags:
- observability
- node-js
- gcp
- google-cloud-platform
- firestore
capability:
exposes:
- type: mcp
namespace: health-dashboard
port: 8080
tools:
- name: update-health-status
description: "Given a Node.js service URL and GCP project, check health and update the Firestore dashboard document."
inputParameters:
- name: service_url
in: body
type: string
description: "The Node.js service health endpoint URL."
- name: service_name
in: body
type: string
description: "The service name for the dashboard entry."
- name: project_id
in: body
type: string
description: "The GCP project for Firestore."
steps:
- name: check-health
type: call
call: "node-service.get-health"
with:
service_url: "{{service_url}}"
- name: update-dashboard
type: call
call: "firestore.update-document"
with:
project_id: "{{project_id}}"
collection: "service-health"
document_id: "{{service_name}}"
status: "{{check-health.status}}"
response_time_ms: "{{check-health.response_time}}"
consumes:
- type: http
namespace: node-service
baseUri: "{{service_url}}"
authentication:
type: bearer
token: "$secrets.internal_service_token"
resources:
- name: health
path: "/health"
operations:
- name: get-health
method: GET
- type: http
namespace: firestore
baseUri: "https://firestore.googleapis.com/v1"
authentication:
type: bearer
token: "$secrets.gcp_access_token"
resources:
- name: documents
path: "/projects/{{project_id}}/databases/(default)/documents/{{collection}}/{{document_id}}"
inputParameters:
- name: project_id
in: path
- name: collection
in: path
- name: document_id
in: path
operations:
- name: update-document
method: PATCH
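The update-dashboard step PATCHes a Firestore document. Without an `updateMask`, Firestore replaces the whole document; to update only `status` and `response_time_ms` while preserving other dashboard fields, the URL needs one `updateMask.fieldPaths` parameter per field. A sketch of building that URL (the project, collection, and document values in the test are illustrative):

```python
from urllib.parse import urlencode

def update_document_url(project_id: str, collection: str, document_id: str,
                        field_paths: list) -> str:
    """Firestore PATCH URL with updateMask.fieldPaths so only the listed
    fields are overwritten rather than replacing the whole document."""
    base = (
        "https://firestore.googleapis.com/v1"
        f"/projects/{project_id}/databases/(default)/documents/{collection}/{document_id}"
    )
    mask = urlencode([("updateMask.fieldPaths", p) for p in field_paths])
    return f"{base}?{mask}"
```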
Collects SLO metrics from Prometheus, calculates error budgets in Grafana, creates burn-rate alerts, logs in ServiceNow, and notifies SRE.
naftiko: "0.5"
info:
label: "Observability SLO Compliance Pipeline"
description: "Collects SLO metrics from Prometheus, calculates error budgets in Grafana, creates burn-rate alerts, logs in ServiceNow, and notifies SRE."
tags:
- sre
- prometheus
- grafana
- servicenow
- slack
capability:
exposes:
- type: mcp
namespace: sre
port: 8080
tools:
- name: observability_slo_compliance_pipeline
description: "Orchestrate the observability SLO compliance pipeline."
inputParameters:
- name: resource_id
in: body
type: string
description: "Primary resource identifier."
steps:
- name: get-prometheus
type: call
call: "prometheus.get-resource"
with:
resource_id: "{{resource_id}}"
- name: process-grafana
type: call
call: "grafana.process-resource"
with:
resource_id: "{{resource_id}}"
- name: create-servicenow
type: call
call: "servicenow.create-resource"
with:
resource_id: "{{resource_id}}"
- name: notify-slack
type: call
call: "slack.notify-resource"
with:
resource_id: "{{resource_id}}"
consumes:
- type: http
namespace: prometheus
baseUri: "https://accord-prometheus.com/api/v1"
authentication:
type: bearer
token: "$secrets.prometheus_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: prometheus-op
method: POST
- type: http
namespace: grafana
baseUri: "https://accord-grafana.com/api"
authentication:
type: bearer
token: "$secrets.grafana_api_key"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: grafana-op
method: POST
- type: http
namespace: servicenow
baseUri: "https://accord.service-now.com/api/now"
authentication:
type: basic
username: "$secrets.servicenow_user"
password: "$secrets.servicenow_password"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: servicenow-op
method: POST
- type: http
namespace: slack
baseUri: "https://slack.com/api"
authentication:
type: bearer
token: "$secrets.slack_bot_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: slack-op
method: POST
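The pipelines above thread parameters between steps through `{{...}}` placeholders. A minimal sketch of that substitution, as a hypothetical renderer — the Naftiko runtime's actual template engine is not specified here:

```python
import re

# Sketch: resolve {{name}} placeholders the way the step and path
# definitions above use them. Unknown names are left intact so
# partially bound templates stay visible for debugging.

def render(template: str, params: dict) -> str:
    def sub(m: re.Match) -> str:
        name = m.group(1).strip()
        return str(params[name]) if name in params else m.group(0)
    return re.sub(r"\{\{([^}]+)\}\}", sub, template)

rendered = render("/resources/{{resource_id}}", {"resource_id": "slo-42"})
```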
Retrieves Okta user profile for Accord identity management.
naftiko: "0.5"
info:
label: "Okta User Profile Lookup"
description: "Retrieves Okta user profile for Accord identity management."
tags:
- security
- okta
- identity
capability:
exposes:
- type: mcp
namespace: identity
port: 8080
tools:
- name: get-user
description: "Look up user at Accord."
inputParameters:
- name: user_email
in: body
type: string
description: "The email address of the user to look up."
call: "okta.okta_user_profile_lookup"
with:
user_email: "{{user_email}}"
consumes:
- type: http
namespace: okta
baseUri: "https://accord.okta.com/api/v1"
authentication:
type: apiKey
key: "$secrets.okta_api_token"
header: "Authorization"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: okta_user_profile_lookup
method: GET
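Note that Okta API tokens use the SSWS scheme in the Authorization header, which is why this consume is declared as an apiKey rather than a bearer token. A sketch of the headers and lookup URL — the email address is a hypothetical example, and Okta's `GET /users/{id}` accepts a login (email) as the identifier:

```python
# Sketch: headers and URL for the Okta user lookup above. The token
# value must carry the SSWS scheme, unlike the Bearer scheme used by
# most other consumes in this file.

def okta_headers(api_token: str) -> dict:
    return {
        "Authorization": f"SSWS {api_token}",
        "Accept": "application/json",
    }

def okta_user_url(base_uri: str, user_email: str) -> str:
    return f"{base_uri}/users/{user_email}"

headers = okta_headers("tok123")
url = okta_user_url("https://accord.okta.com/api/v1", "ada@accord.com")
```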
Retrieves PagerDuty incident details for Accord on-call teams.
naftiko: "0.5"
info:
label: "PagerDuty Incident Details"
description: "Retrieves PagerDuty incident details for Accord on-call teams."
tags:
- devops
- pagerduty
- on-call
capability:
exposes:
- type: mcp
namespace: incident-mgmt
port: 8080
tools:
- name: get-incident
description: "Look up incident at Accord."
inputParameters:
- name: incident_id
in: body
type: string
description: "The incident ID to look up."
call: "pagerduty.pagerduty_incident_details"
with:
incident_id: "{{incident_id}}"
consumes:
- type: http
namespace: pagerduty
baseUri: "https://api.pagerduty.com"
authentication:
type: bearer
token: "$secrets.pagerduty_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: pagerduty_incident_details
method: GET
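If an account-level API key is used instead of the OAuth bearer token configured above, PagerDuty's REST API v2 expects the Token scheme plus a versioned Accept header. A sketch of those headers, with a placeholder token:

```python
# Sketch: PagerDuty REST API v2 headers for API-key auth. The consume
# above models OAuth (Bearer); classic API keys use the Token scheme
# shown here instead.

def pagerduty_headers(api_token: str) -> dict:
    return {
        "Authorization": f"Token token={api_token}",
        "Accept": "application/vnd.pagerduty+json;version=2",
    }

headers = pagerduty_headers("example-token")
```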
Runs load tests via k6, collects metrics from Datadog, analyzes results in Snowflake, generates report in Confluence, and notifies QA team.
naftiko: "0.5"
info:
label: "Performance Load Test Pipeline"
description: "Runs load tests via k6, collects metrics from Datadog, analyzes results in Snowflake, generates report in Confluence, and notifies QA team."
tags:
- performance
- datadog
- snowflake
- confluence
- slack
capability:
exposes:
- type: mcp
namespace: performance
port: 8080
tools:
- name: performance_load_test_pipeline
description: "Orchestrate the performance load test pipeline."
inputParameters:
- name: resource_id
in: body
type: string
description: "Primary resource identifier."
steps:
- name: get-datadog
type: call
call: "datadog.get-resource"
with:
resource_id: "{{resource_id}}"
- name: process-snowflake
type: call
call: "snowflake.process-resource"
with:
resource_id: "{{resource_id}}"
- name: create-confluence
type: call
call: "confluence.create-resource"
with:
resource_id: "{{resource_id}}"
- name: notify-slack
type: call
call: "slack.notify-resource"
with:
resource_id: "{{resource_id}}"
consumes:
- type: http
namespace: datadog
baseUri: "https://api.datadoghq.com/api/v1"
authentication:
type: apiKey
key: "$secrets.datadog_api_key"
header: "DD-API-KEY"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: datadog-op
method: POST
- type: http
namespace: snowflake
baseUri: "https://accord.snowflakecomputing.com/api/v2"
authentication:
type: bearer
token: "$secrets.snowflake_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: snowflake-op
method: POST
- type: http
namespace: confluence
baseUri: "https://accord.atlassian.net/wiki/rest/api"
authentication:
type: basic
username: "$secrets.confluence_user"
password: "$secrets.confluence_api_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: confluence-op
method: POST
- type: http
namespace: slack
baseUri: "https://slack.com/api"
authentication:
type: bearer
token: "$secrets.slack_bot_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: slack-op
method: POST
Triggers Power BI dataset refresh for Accord reporting.
naftiko: "0.5"
info:
label: "Power BI Refresh Trigger"
description: "Triggers Power BI dataset refresh for Accord reporting."
tags:
- analytics
- power-bi
- reporting
capability:
exposes:
- type: mcp
namespace: analytics
port: 8080
tools:
- name: trigger-refresh
description: "Trigger a Power BI dataset refresh at Accord."
inputParameters:
- name: dataset_id
in: body
type: string
description: "The dataset ID to refresh."
call: "powerbi.power_bi_refresh_trigger"
with:
dataset_id: "{{dataset_id}}"
consumes:
- type: http
namespace: powerbi
baseUri: "https://api.powerbi.com/v1.0/myorg"
authentication:
type: bearer
token: "$secrets.powerbi_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: power_bi_refresh_trigger
method: POST
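Starting a Power BI dataset refresh is a POST to the `/datasets/{dataset_id}/refreshes` collection. A sketch of the request this tool implies — the dataset ID is a hypothetical example, and `notifyOption` is an optional field shown for illustration:

```python
# Sketch: the refresh request behind the trigger-refresh tool.
# Refreshes are created with POST on the refreshes collection.

BASE = "https://api.powerbi.com/v1.0/myorg"

def refresh_request(dataset_id: str) -> tuple[str, str, dict]:
    url = f"{BASE}/datasets/{dataset_id}/refreshes"
    # notifyOption is optional; MailOnFailure is a common choice.
    body = {"notifyOption": "MailOnFailure"}
    return "POST", url, body

method, url, body = refresh_request("abc123")  # hypothetical dataset id
```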
Builds a React application via Cloud Build with Vite, uploads static assets to GCS, and purges the Cloud CDN cache to serve the latest version.
naftiko: "0.5"
info:
label: "React Frontend Deployment with CDN Purge"
description: "Builds a React application via Cloud Build with Vite, uploads static assets to GCS, and purges the Cloud CDN cache to serve the latest version."
tags:
- automation
- react
- vite
- gcp
- google-cloud-platform
- deployment
capability:
exposes:
- type: mcp
namespace: frontend-deploy
port: 8080
tools:
- name: deploy-react-app
description: "Given a GCP project, source repo, and GCS bucket, build the React app, upload assets, and purge CDN cache."
inputParameters:
- name: project_id
in: body
type: string
description: "The GCP project identifier."
- name: repo_name
in: body
type: string
description: "The source repository with the React/Vite application."
- name: bucket_name
in: body
type: string
description: "The GCS bucket serving the static frontend."
- name: cdn_url_map
in: body
type: string
description: "The Cloud CDN URL map resource name."
steps:
- name: build-frontend
type: call
call: "cloud-build.create-build"
with:
project_id: "{{project_id}}"
repo_name: "{{repo_name}}"
- name: upload-assets
type: call
call: "gcs.upload-object"
with:
bucket_name: "{{bucket_name}}"
file_name: "dist/index.html"
- name: purge-cdn
type: call
call: "gcp-compute.invalidate-cache"
with:
project_id: "{{project_id}}"
url_map: "{{cdn_url_map}}"
consumes:
- type: http
namespace: cloud-build
baseUri: "https://cloudbuild.googleapis.com/v1"
authentication:
type: bearer
token: "$secrets.gcp_access_token"
resources:
- name: builds
path: "/projects/{{project_id}}/builds"
inputParameters:
- name: project_id
in: path
operations:
- name: create-build
method: POST
- type: http
namespace: gcs
baseUri: "https://storage.googleapis.com/upload/storage/v1"
authentication:
type: bearer
token: "$secrets.gcp_access_token"
resources:
- name: objects
path: "/b/{{bucket_name}}/o?uploadType=media&name={{file_name}}"
inputParameters:
- name: bucket_name
in: path
- name: file_name
in: query
operations:
- name: upload-object
method: POST
- type: http
namespace: gcp-compute
baseUri: "https://compute.googleapis.com/compute/v1"
authentication:
type: bearer
token: "$secrets.gcp_access_token"
resources:
- name: url-maps
path: "/projects/{{project_id}}/global/urlMaps/{{url_map}}/invalidateCache"
inputParameters:
- name: project_id
in: path
- name: url_map
in: path
operations:
- name: invalidate-cache
method: POST
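The purge-cdn step's invalidateCache call takes a CacheInvalidationRule body; a path of `/*` invalidates everything under the URL map. A sketch of the request, with hypothetical project and URL-map names:

```python
# Sketch: the Compute API cache invalidation behind the purge-cdn
# step. The body is a CacheInvalidationRule; "/*" purges every
# cached path served by the URL map.

BASE = "https://compute.googleapis.com/compute/v1"

def invalidate_cache(project_id: str, url_map: str,
                     path: str = "/*") -> tuple[str, dict]:
    url = (f"{BASE}/projects/{project_id}/global/urlMaps/"
           f"{url_map}/invalidateCache")
    return url, {"path": path}

url, body = invalidate_cache("accord-prod", "frontend-map")
```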
Retrieves Salesforce account details for Accord sales teams.
naftiko: "0.5"
info:
label: "Salesforce Account Info"
description: "Retrieves Salesforce account details for Accord sales teams."
tags:
- crm
- salesforce
- accounts
capability:
exposes:
- type: mcp
namespace: crm
port: 8080
tools:
- name: get-account
description: "Look up account at Accord."
inputParameters:
- name: account_id
in: body
type: string
description: "The account ID to look up."
call: "salesforce.salesforce_account_info"
with:
account_id: "{{account_id}}"
consumes:
- type: http
namespace: salesforce
baseUri: "https://accord.my.salesforce.com/services/data/v58.0"
authentication:
type: bearer
token: "$secrets.salesforce_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: salesforce_account_info
method: GET
Identifies expiring secrets in Vault, generates new secrets, updates Kubernetes secrets, validates services, logs rotation in ServiceNow.
naftiko: "0.5"
info:
label: "Secret Rotation Orchestrator"
description: "Identifies expiring secrets in Vault, generates new secrets, updates Kubernetes secrets, validates services, logs rotation in ServiceNow."
tags:
- security
- kubernetes
- servicenow
- slack
capability:
exposes:
- type: mcp
namespace: security
port: 8080
tools:
- name: secret_rotation_orchestrator
description: "Orchestrate the secret rotation workflow."
inputParameters:
- name: resource_id
in: body
type: string
description: "Primary resource identifier."
steps:
- name: get-kubernetes
type: call
call: "k8s.get-resource"
with:
resource_id: "{{resource_id}}"
- name: process-servicenow
type: call
call: "servicenow.process-resource"
with:
resource_id: "{{resource_id}}"
- name: create-slack
type: call
call: "slack.create-resource"
with:
resource_id: "{{resource_id}}"
consumes:
- type: http
namespace: k8s
baseUri: "https://accord-k8s.com/api/v1"
authentication:
type: bearer
token: "$secrets.k8s_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: kubernetes-op
method: POST
- type: http
namespace: servicenow
baseUri: "https://accord.service-now.com/api/now"
authentication:
type: basic
username: "$secrets.servicenow_user"
password: "$secrets.servicenow_password"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: servicenow-op
method: POST
- type: http
namespace: slack
baseUri: "https://slack.com/api"
authentication:
type: bearer
token: "$secrets.slack_bot_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: slack-op
method: POST
Checks Istio service mesh health, monitors latency in Prometheus, detects anomalies via Grafana, creates alerts in ServiceNow, and notifies ops.
naftiko: "0.5"
info:
label: "Service Mesh Health Monitor"
description: "Checks Istio service mesh health, monitors latency in Prometheus, detects anomalies via Grafana, creates alerts in ServiceNow, and notifies ops."
tags:
- service-mesh
- prometheus
- grafana
- servicenow
- slack
capability:
exposes:
- type: mcp
namespace: service-mesh
port: 8080
tools:
- name: service_mesh_health_monitor
description: "Orchestrate the service mesh health monitoring workflow."
inputParameters:
- name: resource_id
in: body
type: string
description: "Primary resource identifier."
steps:
- name: get-prometheus
type: call
call: "prometheus.get-resource"
with:
resource_id: "{{resource_id}}"
- name: process-grafana
type: call
call: "grafana.process-resource"
with:
resource_id: "{{resource_id}}"
- name: create-servicenow
type: call
call: "servicenow.create-resource"
with:
resource_id: "{{resource_id}}"
- name: notify-slack
type: call
call: "slack.notify-resource"
with:
resource_id: "{{resource_id}}"
consumes:
- type: http
namespace: prometheus
baseUri: "https://accord-prometheus.com/api/v1"
authentication:
type: bearer
token: "$secrets.prometheus_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: prometheus-op
method: POST
- type: http
namespace: grafana
baseUri: "https://accord-grafana.com/api"
authentication:
type: bearer
token: "$secrets.grafana_api_key"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: grafana-op
method: POST
- type: http
namespace: servicenow
baseUri: "https://accord.service-now.com/api/now"
authentication:
type: basic
username: "$secrets.servicenow_user"
password: "$secrets.servicenow_password"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: servicenow-op
method: POST
- type: http
namespace: slack
baseUri: "https://slack.com/api"
authentication:
type: bearer
token: "$secrets.slack_bot_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: slack-op
method: POST
Checks ServiceNow incident status for Accord IT operations.
naftiko: "0.5"
info:
label: "ServiceNow Incident Status Check"
description: "Checks ServiceNow incident status for Accord IT operations."
tags:
- itsm
- servicenow
- incident-management
capability:
exposes:
- type: mcp
namespace: itsm
port: 8080
tools:
- name: get-incident
description: "Look up incident at Accord."
inputParameters:
- name: incident_id
in: body
type: string
description: "The incident ID to look up."
call: "servicenow.servicenow_incident_status_check"
with:
incident_id: "{{incident_id}}"
consumes:
- type: http
namespace: servicenow
baseUri: "https://accord.service-now.com/api/now"
authentication:
type: basic
username: "$secrets.servicenow_user"
password: "$secrets.servicenow_password"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: servicenow_incident_status_check
method: GET
Sends a message to a Slack channel for Accord notifications.
naftiko: "0.5"
info:
label: "Slack Channel Post"
description: "Sends a message to a Slack channel for Accord notifications."
tags:
- collaboration
- slack
- messaging
capability:
exposes:
- type: mcp
namespace: messaging
port: 8080
tools:
- name: send-message
description: "Post a message to a Slack channel at Accord."
inputParameters:
- name: channel
in: body
type: string
description: "The channel to post to."
- name: text
in: body
type: string
description: "The message text to post."
call: "slack.slack_channel_post"
with:
channel: "{{channel}}"
text: "{{text}}"
consumes:
- type: http
namespace: slack
baseUri: "https://slack.com/api"
authentication:
type: bearer
token: "$secrets.slack_bot_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: slack_channel_post
method: POST
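Posting to a channel maps to Slack's `chat.postMessage` Web API method, which takes a JSON body with the channel and text. A sketch of that payload — the channel name and message are hypothetical examples:

```python
import json

# Sketch: the JSON payload a send-message tool would POST to
# Slack's chat.postMessage method.

def post_message_payload(channel: str, text: str) -> str:
    return json.dumps({"channel": channel, "text": text})

payload = post_message_payload("#accord-notifications", "Deploy complete")
```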
Executes SQL queries against Accord Snowflake warehouse.
naftiko: "0.5"
info:
label: "Snowflake Query Executor"
description: "Executes SQL queries against Accord Snowflake warehouse."
tags:
- data
- snowflake
- analytics
capability:
exposes:
- type: mcp
namespace: analytics
port: 8080
tools:
- name: run-query
description: "Run a SQL query against the Accord Snowflake warehouse."
inputParameters:
- name: sql_query
in: body
type: string
description: "The SQL query to execute."
call: "snowflake.snowflake_query_executor"
with:
sql_query: "{{sql_query}}"
consumes:
- type: http
namespace: snowflake
baseUri: "https://accord.snowflakecomputing.com/api/v2"
authentication:
type: bearer
token: "$secrets.snowflake_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: snowflake_query_executor
method: POST
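Snowflake's SQL API v2 accepts statements via POST on `/statements`. A sketch of the request body this tool implies — the warehouse name and timeout are assumptions for illustration, not values from this definition:

```python
# Sketch: a Snowflake SQL API v2 statement submission body for the
# run-query tool. Warehouse and timeout here are illustrative
# assumptions, not part of the capability definition above.

def statement_request(sql_query: str, warehouse: str = "ACCORD_WH") -> dict:
    return {
        "statement": sql_query,
        "warehouse": warehouse,
        "timeout": 60,  # seconds before the call returns asynchronously
    }

req = statement_request("SELECT COUNT(*) FROM orders")
```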
Searches Splunk indexes for log entries at Accord.
naftiko: "0.5"
info:
label: "Splunk Log Search"
description: "Searches Splunk indexes for log entries at Accord."
tags:
- devops
- splunk
- logging
capability:
exposes:
- type: mcp
namespace: logging
port: 8080
tools:
- name: search-logs
description: "Search Splunk logs for Accord."
inputParameters:
- name: query
in: body
type: string
description: "The search query to run."
call: "splunk.splunk_log_search"
with:
query: "{{query}}"
consumes:
- type: http
namespace: splunk
baseUri: "https://accord-splunk.com/services"
authentication:
type: bearer
token: "$secrets.splunk_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: splunk_log_search
method: GET
Collects sprint metrics from Jira, gathers team feedback via Slack polls, generates retro summary in Confluence, creates action items, and notifies scrum master.
naftiko: "0.5"
info:
label: "Sprint Retrospective Automation"
description: "Collects sprint metrics from Jira, gathers team feedback via Slack polls, generates retro summary in Confluence, creates action items, and notifies scrum master."
tags:
- agile
- jira
- confluence
- slack
capability:
exposes:
- type: mcp
namespace: agile
port: 8080
tools:
- name: sprint_retrospective_automation
description: "Orchestrate the sprint retrospective automation workflow."
inputParameters:
- name: resource_id
in: body
type: string
description: "Primary resource identifier."
steps:
- name: get-jira
type: call
call: "jira.get-resource"
with:
resource_id: "{{resource_id}}"
- name: process-confluence
type: call
call: "confluence.process-resource"
with:
resource_id: "{{resource_id}}"
- name: create-slack
type: call
call: "slack.create-resource"
with:
resource_id: "{{resource_id}}"
- name: notify-servicenow
type: call
call: "servicenow.notify-resource"
with:
resource_id: "{{resource_id}}"
consumes:
- type: http
namespace: jira
baseUri: "https://accord.atlassian.net/rest/api/3"
authentication:
type: basic
username: "$secrets.jira_user"
password: "$secrets.jira_api_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: jira-op
method: POST
- type: http
namespace: confluence
baseUri: "https://accord.atlassian.net/wiki/rest/api"
authentication:
type: basic
username: "$secrets.confluence_user"
password: "$secrets.confluence_api_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: confluence-op
method: POST
- type: http
namespace: slack
baseUri: "https://slack.com/api"
authentication:
type: bearer
token: "$secrets.slack_bot_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: slack-op
method: POST
- type: http
namespace: servicenow
baseUri: "https://accord.service-now.com/api/now"
authentication:
type: basic
username: "$secrets.servicenow_user"
password: "$secrets.servicenow_password"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: servicenow-op
method: POST
Retrieves metadata for a Visio diagram from Google Drive and backs up the diagram to a GCS bucket for archival and version control.
naftiko: "0.5"
info:
label: "Visio Diagram Export with GCS Backup"
description: "Retrieves metadata for a Visio diagram from Google Drive and backs up the diagram to a GCS bucket for archival and version control."
tags:
- productivity
- visio
- google
- documentation
- gcp
- google-cloud
capability:
exposes:
- type: mcp
namespace: diagram-mgmt
port: 8080
tools:
- name: get-diagram-metadata
description: "Given a Google Drive file ID and GCS bucket, fetch diagram metadata and upload a backup copy to GCS."
inputParameters:
- name: file_id
in: body
type: string
description: "The Google Drive file ID of the Visio diagram."
- name: bucket_name
in: body
type: string
description: "The GCS bucket for diagram backups."
steps:
- name: fetch-metadata
type: call
call: "gdrive.get-file"
with:
file_id: "{{file_id}}"
- name: backup-to-gcs
type: call
call: "gcs.upload-object"
with:
bucket_name: "{{bucket_name}}"
file_name: "visio-backups/{{fetch-metadata.name}}"
consumes:
- type: http
namespace: gdrive
baseUri: "https://www.googleapis.com/drive/v3"
authentication:
type: bearer
token: "$secrets.gcp_access_token"
resources:
- name: files
path: "/files/{{file_id}}?fields=id,name,size,modifiedTime,permissions"
inputParameters:
- name: file_id
in: path
operations:
- name: get-file
method: GET
- type: http
namespace: gcs
baseUri: "https://storage.googleapis.com/upload/storage/v1"
authentication:
type: bearer
token: "$secrets.gcp_access_token"
resources:
- name: objects
path: "/b/{{bucket_name}}/o?uploadType=media&name={{file_name}}"
inputParameters:
- name: bucket_name
in: path
- name: file_name
in: query
operations:
- name: upload-object
method: POST
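The backup-to-gcs step expands the upload path with an object name that contains a slash (`visio-backups/...`), which must be percent-encoded in the `name` query parameter of a GCS media upload. A sketch of that expansion, with hypothetical bucket and file names:

```python
from urllib.parse import quote

# Sketch: the GCS media-upload URL the backup-to-gcs step expands
# to. Object names containing "/" are percent-encoded so the whole
# name lands in the single `name` query parameter.

BASE = "https://storage.googleapis.com/upload/storage/v1"

def upload_url(bucket_name: str, file_name: str) -> str:
    return (f"{BASE}/b/{bucket_name}/o"
            f"?uploadType=media&name={quote(file_name, safe='')}")

url = upload_url("accord-diagrams", "visio-backups/network.vsdx")
```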
Queries a GCP Cloud Build trigger for the latest Vite frontend build and publishes the build result to Pub/Sub for team notification.
naftiko: "0.5"
info:
label: "Vite Build Status with Notification"
description: "Queries a GCP Cloud Build trigger for the latest Vite frontend build and publishes the build result to Pub/Sub for team notification."
tags:
- devops
- vite
- gcp
- google-cloud-platform
- build
- event-driven
capability:
exposes:
- type: mcp
namespace: frontend-build
port: 8080
tools:
- name: get-vite-build-status
description: "Given a GCP project and Cloud Build trigger ID, fetch the latest build status and publish the result to Pub/Sub."
inputParameters:
- name: project_id
in: body
type: string
description: "The GCP project identifier."
- name: trigger_id
in: body
type: string
description: "The Cloud Build trigger ID for the Vite build."
steps:
- name: fetch-build
type: call
call: "cloud-build.get-build"
with:
project_id: "{{project_id}}"
trigger_id: "{{trigger_id}}"
- name: notify-team
type: call
call: "pubsub.publish"
with:
project_id: "{{project_id}}"
topic_name: "build-notifications"
message_data: "Vite build {{fetch-build.build_id}}: {{fetch-build.status}}, Duration: {{fetch-build.duration}}"
consumes:
- type: http
namespace: cloud-build
baseUri: "https://cloudbuild.googleapis.com/v1"
authentication:
type: bearer
token: "$secrets.gcp_access_token"
resources:
- name: builds
path: "/projects/{{project_id}}/triggers/{{trigger_id}}/runs?pageSize=1"
inputParameters:
- name: project_id
in: path
- name: trigger_id
in: path
operations:
- name: get-build
method: GET
- type: http
namespace: pubsub
baseUri: "https://pubsub.googleapis.com/v1"
authentication:
type: bearer
token: "$secrets.gcp_access_token"
resources:
- name: topics
path: "/projects/{{project_id}}/topics/{{topic_name}}:publish"
inputParameters:
- name: project_id
in: path
- name: topic_name
in: path
operations:
- name: publish
method: POST
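The notify-team step's publish call requires the message data to be base64-encoded, since Pub/Sub's REST API only carries binary payloads that way. A sketch of the publish body, with an illustrative message:

```python
import base64

# Sketch: the topics.publish body behind the notify-team step.
# Pub/Sub's REST API requires each message's data field to be
# base64-encoded.

def publish_body(message_data: str) -> dict:
    encoded = base64.b64encode(message_data.encode()).decode()
    return {"messages": [{"data": encoded}]}

body = publish_body("Vite build 42: SUCCESS")
```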
Retrieves employee profile from Workday at Accord.
naftiko: "0.5"
info:
label: "Workday Employee Lookup"
description: "Retrieves employee profile from Workday at Accord."
tags:
- hr
- workday
- employee-data
capability:
exposes:
- type: mcp
namespace: hr
port: 8080
tools:
- name: get-employee
description: "Look up Workday employee at Accord."
inputParameters:
- name: employee_id
in: body
type: string
description: "The employee ID to look up."
call: "workday.workday_employee_lookup"
with:
employee_id: "{{employee_id}}"
consumes:
- type: http
namespace: workday
baseUri: "https://wd5-impl-services1.workday.com/ccx/api/v1/accord"
authentication:
type: bearer
token: "$secrets.workday_token"
resources:
- name: resources
path: "/resources/{{resource_id}}"
operations:
- name: workday_employee_lookup
method: GET