Karrots YAML File

This is the default karrots.yaml file created in a cluster control repo. Below the file you will find an explanation of each of its elements.

Note

Eventually we plan to create a web service that will help you create this file. Stay tuned.

kubernetes:
  # hosting provider (eks, gke)
  provider: eks
  # cli tool profile name (gcloud, aws-cli)
  profile: karrots
  # cluster base name (karrots will propose a full cluster name using this and the branch name)
  clusterBaseName: karrots-helloworld
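  # (e.g. on branch "main" the proposed name might look like "karrots-helloworld-main";
  # illustrative only, review the name karrots proposes before accepting it)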
  # provider account organization name 
  organizationId: org
  # provider account number
  accountId: 0123456789
  # provider project id/name
  projectId: zerodiff
  dns:
    # domain name where the cluster will route
    domainName: zerodiff.org
    # root domain setup info
    root:
      # automate insertion of the subdomain ns record into the root domain
      # (if false, then set the acme challenge to staging until you create the ns record by hand, then set it to prod.)
      addSubdomainNS: true
      # your organization's primary dns root zone id/name
      zoneName: Z0123456789
      # the root project id that owns the root dns resolver
      projectId: zerodiff
      # the account delegate that allows us to write to the root dns zone
      delegateRoleArn: arn:aws:iam::0123456789:role/karrots-dns
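      # (typically an IAM role in the root dns account whose trust policy allows this account to assume it)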
  gitDeployKey:
    # process to generate and install the deploy key: manual, github (automated), gitlab (automated)
    process: github
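    # (e.g. for the manual process you might generate a key with
    #   ssh-keygen -t ed25519 -f ./karrots-deploy-key -N ""
    # and add the public half as a deploy key on the control repo;
    # the filename here is illustrative)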
  # whether the master control plane is regional (vs. zonal)
  mcpIsRegional: false
  # primary provider region. e.g.: us-west-1 (eks), us-west1 (gke)
  region: us-west-1
  # primary provider availability zone. e.g.: us-west-1a (eks), us-west1-a (gke)
  primaryZone: us-west-1a
  # provider zone list (for vpc, etc.) e.g.: "us-west-1a", "us-west-1b" (eks), "us-west1-a" (gke)
  zoneList: ["us-west-1a", "us-west-1b"]
  # node pool characteristics
  nodePool:
    name: primary
    maxSize: 3
    instanceType: t3.medium
  # let's encrypt acme challenge url
  # (best to leave the staging url and change the host record after the cluster is up;
  # otherwise you might get rate-limited if something goes wrong)
  acmeChallengeURL: https://acme-staging-v02.api.letsencrypt.org/directory
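  # (the production url, once everything is healthy, is https://acme-v02.api.letsencrypt.org/directory)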
  fluxcd:
    # if your git hosts require ssh known_hosts entries, add them here
    sshKnownHosts:
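    # (e.g. an entry for github can typically be produced with: ssh-keyscan github.com)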
  baseServices:
    ambassador:
      enabled: true
    rbac-manager:
      enabled: true
    sealed-secrets:
      enabled: true
    sumologic:
      enabled: true
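
If addSubdomainNS is false, the NS delegation has to be created in the root zone by hand before switching the ACME challenge URL to production. Below is a minimal sketch of that step using boto3; the zone id, subdomain, and name server values are placeholders, not values karrots generates.

# Sketch: hand-inserting the subdomain NS record into the root dns zone.
# Assumes AWS credentials that can write to the root hosted zone
# (e.g. by assuming the karrots-dns delegate role).
import boto3

route53 = boto3.client("route53")

# Placeholder values; substitute your root zone id, the cluster subdomain,
# and the name servers of the subdomain's hosted zone (reported by
# `aws route53 get-hosted-zone`).
ROOT_ZONE_ID = "Z0123456789"
SUBDOMAIN = "karrots-helloworld.zerodiff.org."
NAME_SERVERS = [
    "ns-1.example.awsdns-00.com.",
    "ns-2.example.awsdns-01.net.",
]

route53.change_resource_record_sets(
    HostedZoneId=ROOT_ZONE_ID,
    ChangeBatch={
        "Comment": "delegate the cluster subdomain to its hosted zone",
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": SUBDOMAIN,
                    "Type": "NS",
                    "TTL": 300,
                    "ResourceRecords": [{"Value": ns} for ns in NAME_SERVERS],
                },
            }
        ],
    },
)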