Getting Started with kops on Scaleway

WARNING: Scaleway support on kOps is currently in alpha, which means that it is in the early stages of development and subject to change. Please use it with caution.

Features

  • Create, update and delete clusters
  • Create, edit and delete instance groups. Editable fields include, but are not limited to:
    • Instance image
    • Instance size (also called commercial type)
  • Migrate from a single-master to a multi-master setup

Next features to implement

  • Autoscaler support
  • Private network
  • BareMetal servers

Requirements

Optional

  • SSH key: creating a cluster can be done without an SSH key, but one is required to update the cluster later. id_rsa and id_ed25519 keys are supported (see the example after this list).
  • Domain name: if you want to host your cluster on your own domain, you will have to register it with Scaleway.
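
If you do not have a supported key yet, a minimal way to generate one and hand it to kOps at creation time could look like the sketch below (the --ssh-public-key flag is general kOps usage rather than anything Scaleway-specific, and the paths are only examples):

# Generate an ed25519 key pair (skip this if you already have one)
ssh-keygen -t ed25519 -f $HOME/.ssh/id_ed25519
# Pass the public key when creating the cluster
kops create cluster --cloud=scaleway --name=my.cluster --zones=fr-par-1 --dns=none --ssh-public-key=$HOME/.ssh/id_ed25519.pub --yes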

Environment Variables

Enable Scaleway

Since Scaleway support is currently in alpha, it is feature-gated and you will need to set this variable:

export KOPS_FEATURE_FLAGS="Scaleway"

Scaleway Credentials

To be able to use Scaleway APIs, it is required to set up your credentials in the environment. You have two ways to pass your credentials:

  1. If you are already familiar with Scaleway's DevTools, then you probably have a config file (its default location is $HOME/.config/scw/config.yaml). If so, you can use the profile of your choice (a sketch of such a config file is shown after this section) by setting:
     export SCW_PROFILE="my-profile"
  2. If not, you can directly set the credentials in your environment:
     export SCW_ACCESS_KEY="my-access-key"
     export SCW_SECRET_KEY="my-secret-key"
     export SCW_DEFAULT_PROJECT_ID="my-project-id"

NB: Keep in mind that the profile is checked first and the environment second, so if you set both, the environment variables will override the information in the config file (profile).
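
For reference, a named profile in the Scaleway config file generally has the shape sketched below. This is only an illustration: the key names follow the Scaleway CLI/SDK conventions, the values are placeholders, and the heredoc would overwrite an existing config file, so prefer editing your real file rather than pasting this as-is.

# Sketch of a $HOME/.config/scw/config.yaml with one named profile (placeholder values)
mkdir -p "$HOME/.config/scw"
cat > "$HOME/.config/scw/config.yaml" <<'EOF'
profiles:
  my-profile:
    access_key: SCWXXXXXXXXXXXXXXXXX
    secret_key: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
    default_project_id: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
    default_region: fr-par
    default_zone: fr-par-1
EOF
export SCW_PROFILE="my-profile"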

S3 Bucket credentials

For kOps to be able to read and write configuration to the state-store bucket, you'll need to set up the following environment variables. The credentials can be the same as in the previous section, but they don't have to be.

export KOPS_STATE_STORE=scw://<bucket-name> # where <bucket-name> is the name of the bucket you set earlier
# Scaleway Object Storage is S3-compatible, so we just override some S3 configurations to talk to our bucket
export S3_REGION=fr-par                     # or another Scaleway region providing Object Storage
export S3_ENDPOINT=s3.$S3_REGION.scw.cloud  # define the provider endpoint
export S3_ACCESS_KEY_ID="my-access-key"     # the S3 API Access Key for your bucket
export S3_SECRET_ACCESS_KEY="my-secret-key" # the S3 API Secret Key for your bucket
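
Once these variables are exported, a quick sanity check (optional, but it catches credential or endpoint mistakes early) is to list the clusters stored in the state store; on a fresh bucket this simply reports that no clusters were found:

kops get clusters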

Creating a Single Master Cluster

# This creates a cluster with no DNS in zone fr-par-1
kops create cluster --cloud=scaleway --name=my.cluster --zones=fr-par-1 --dns=none --yes
# This creates a cluster with the Scaleway DNS (on a domain name that you own and have registered with Scaleway) in zone pl-waw-1
kops create cluster --cloud=scaleway --name=mycluster.mydomain.com --zones=pl-waw-1 --yes 
# This creates a cluster with the gossip DNS in zone nl-ams-2. This is not recommended now that the no-DNS option is available, as the latter is more secure.
kops create cluster --cloud=scaleway --name=mycluster.k8s.local --zones=nl-ams-2 --yes

These basic commands create a cluster with default parameters (an example combining the override flags follows the list):

  • Container Network Interface = cilium. To change it, set the flag --networking=calico. To see the list of supported CNIs, check the networking page
  • Instance type = DEV1-M. To change it, set the flag --node-size=PRO2-XS and/or --control-plane-size=PRO2-XS
  • Instance image = ubuntu_jammy. To change it, set the flag --node-image=ubuntu_focal and/or --control-plane-image=ubuntu_focal
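
For example, overriding all three defaults in a single command (the values are only illustrations of the flags listed above):

kops create cluster --cloud=scaleway --name=my.cluster --zones=fr-par-1 --dns=none --networking=calico --node-size=PRO2-XS --control-plane-size=PRO2-XS --node-image=ubuntu_focal --control-plane-image=ubuntu_focal --yes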

NB: For now, you can only create a kops cluster in a single availability zone (fr-par-1, fr-par-2, fr-par-3, nl-ams-1, nl-ams-2, nl-ams-3, pl-waw-1, pl-waw-2).
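
Once the create command returns, the instances still need a few minutes to boot and join the cluster. A common way to follow this with standard kOps commands (nothing Scaleway-specific) is to export an admin kubeconfig and wait for validation:

# Write a kubeconfig with admin credentials for the new cluster
kops export kubeconfig my.cluster --admin
# Wait until the control plane and the nodes are ready
kops validate cluster my.cluster --wait 10m
kubectl get nodes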

Next steps

Now that you have a working kops cluster, read through the recommendations for production setups guide to learn more about how to configure kops for production workloads. For example, you can migrate your cluster to high-availability.

Editing your cluster

# This opens the cluster's configuration file in a text editor for you to make the desired changes
kops edit cluster mycluster.k8s.local --state=scw://my-state-store
# This applies the changes
kops update cluster mycluster.k8s.local --state=scw://my-state-store --yes
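
Instance groups are edited the same way. The group name below is only an illustration; list yours first to get the real names:

# List the instance groups of the cluster
kops get instancegroups --name=mycluster.k8s.local --state=scw://my-state-store
# Edit one of them, for example to change the instance size or image
kops edit instancegroup nodes-fr-par-1 --name=mycluster.k8s.local --state=scw://my-state-store
# Apply the changes; some of them only take effect after a rolling update of the instances
kops update cluster mycluster.k8s.local --state=scw://my-state-store --yes
kops rolling-update cluster mycluster.k8s.local --state=scw://my-state-store --yes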

Deleting your cluster

kops delete cluster mycluster.k8s.local --yes
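
If you want to review what is going to be removed before committing, run the same command without --yes first; kOps then only prints the resources it would delete:

# Preview the resources that would be deleted
kops delete cluster mycluster.k8s.local
# Actually delete the cluster
kops delete cluster mycluster.k8s.local --yes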

Terraform support

kOps can also generate a Terraform configuration corresponding to the cluster, instead of creating the cloud resources directly.

You can find more information on the dedicated page on kOps Terraform support or Scaleway's Terraform provider's documentation.

For clusters without load-balancers

This concerns clusters using Scaleway DNS.

kops create cluster --cloud=scaleway --name=mycluster.mydomain.com --zones=fr-par-1 --target=terraform --out=$OUTPUT_DIR
cd $OUTPUT_DIR
terraform init
terraform apply

kOps will generate a kubernetes.tf file in the output directory of your choice; you just have to initialize Terraform and apply the configuration. NB: keep in mind that every new call to kOps with the flags --target=terraform --out=$OUTPUT_DIR will overwrite kubernetes.tf, so any changes that you made to it will be lost.

For clusters with load-balancers

This concerns clusters using no DNS or gossip DNS. For these types of clusters, a small trick is needed: kOps doesn't know the IP of the load-balancer at the time it writes the instances' cloud-init configuration, so we will have to run an update, then a rolling-update.

Creating a valid cluster

kops create cluster --cloud=scaleway --name=my.cluster --zones=fr-par-1 --target=terraform --out=$OUTPUT_DIR
cd $OUTPUT_DIR
terraform init
terraform apply
# Now that the load-balancer is up, we update the cluster to integrate its IP to the instances' configuration
kops update cluster my.cluster --target=terraform --out=$OUTPUT_DIR
# Then we replace the instances so that they reboot with the new configuration (the --cloudonly flag is needed because the cluster can't be validated at this point)
kops rolling-update cluster my.cluster --cloudonly --yes
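
Once the rolling update has finished, the cluster should become reachable and validate like any other (these are standard kOps commands):

kops export kubeconfig my.cluster --admin
kops validate cluster my.cluster --wait 10m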

Keeping the Terraform state consistent after a rolling-update

Now that the instances have been replaced by the rolling-update, your cluster can be validated. However, since resources have changed outside of Terraform, the state is now invalid. If you need to keep the state consistent with the cluster, you should import the new instances. This can be achieved with the script below:

# First we need to retrieve the names of the instances
cd "$OUTPUT_DIR" || exit
TF_SERVERS=($(grep 'resource "scaleway_instance_server"' < kubernetes.tf | awk '{print $3}' | cut -d'"' -f 2))
# Then we get the zone for the import
ZONE=$(terraform output zone | cut -d '"' -f2)
# And for each instance:
for SERVER in "${TF_SERVERS[@]}"; do
  # We remove the stale instance from the state
  terraform state rm "scaleway_instance_server.$SERVER"
  # We fetch its new ID
  NEW_SERVER_ID=$(scw instance server list zone="$ZONE" name="$SERVER" -o template="{{ .ID }}")
  if [ "$NEW_SERVER_ID" == "" ]; then
    echo "could not find new ID of the server $SERVER"
    continue
  fi
  # We import the new instance in the state
  terraform import "scaleway_instance_server.$SERVER" "$ZONE/$NEW_SERVER_ID"
done

NB: for the script to run, you will need to have the Scaleway CLI installed. You can also fetch the IDs of the new instances manually in the Scaleway Console, but if you have many instances this may not be practical. If you need help with the CLI, these resources might help: