Adding PVs to PAWS data pipeline

The PAWS team submitted this version bump to cfp-sandbox-cluster to add some PVs; I’ll document the journey here:

Here are the changes in the underlying project repo that correspond to this tag bump:

Initially, the GitHub Actions triggered by pushes to main didn’t fire when I merged the PR. I checked and found that there was an ongoing incident with GitHub Actions. I checked back later and the incident had cleared, but our actions still had not run. So I gave them a nudge by pulling down the latest main branch and pushing an empty commit:

git checkout main
git pull
git commit --allow-empty -m "chore: empty commit to re-trigger GH actions"
git push

This got the actions running. The action that prepares new manifests for deployment to the cluster failed, though, and I found this in the logs:

It looks like that comes from the duplicate kind key added here:
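For context, the failure mode looks something like this (a hypothetical PV manifest, not the actual file — YAML forbids repeating a key within one mapping):

```yaml
# Hypothetical example of the problem: two `kind` keys in one document.
# yamllint reports this as: duplication of key "kind" in mapping
apiVersion: v1
kind: PersistentVolume
kind: PersistentVolumeClaim  # duplicate key — only one `kind` per document
metadata:
  name: example-pv
```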

@Cris could you make a correction and push a new tag?

@Cris from a paws-data-pipeline working tree, here’s how you can verify the helm chart rendering locally:

helm template \
  --namespace paws-data-pipeline \
  --release-name paws-data-pipeline \
  ./src/helm-chart \
| yamllint -

I installed this yamllint command with:

pip3 install yamllint

I also find it very helpful to pipe helm template output into VS Code. VS Code can’t detect the language from STDIN, so you want to open the Command Palette, search for Change Language Mode, and select YAML. If you have the Microsoft Kubernetes VS Code extension installed, VS Code will automatically detect that there are k8s manifests in your YAML and provide deep inline validation AND hover documentation:

helm template \
  --namespace paws-data-pipeline \
  --release-name paws-data-pipeline \
  ./src/helm-chart \
| code -

Will do. Thank you! …


Thanks for the pointers!

I think it’s correct now, but yamllint does not like the indentation at line 79:

  - hostnames:
    - server

It appears to want another level of indentation, but the existing indentation seems correct, and VS Code is happy with it.
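That style is valid YAML — yamllint’s default indentation rule is just stricter than it needs to be about sequences indented flush with their parent key. If you want to keep the flush style, a minimal `.yamllint` sketch (assuming you otherwise want the default rules) relaxes it:

```yaml
# .yamllint — hypothetical config, assuming defaults elsewhere.
# `indent-sequences: whatever` accepts sequences indented either
# flush with their parent key or one level deeper.
extends: default
rules:
  indentation:
    indent-sequences: whatever
```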

I’ve updated the kustomization.yaml file in BitWarden. Can you apply that new file before running this?

Thanks for your patience,

@Cris I’ve applied your latest secrets from BitWarden

There was one new error in the YAML, I hope you don’t mind I went ahead and patched it and tagged v2.30:

That’s deployed and now the server container is in a crash loop:

SL_TOKEN doesn’t appear in the latest kustomization.yaml I grabbed from BitWarden so perhaps that one got left out?
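For reference, in a kustomize secretGenerator the entry would look something like this (the generator name and placeholder value here are hypothetical, not taken from the actual file):

```yaml
# Hypothetical kustomization.yaml fragment — real names may differ
secretGenerator:
  - name: paws-secrets
    literals:
      - SL_TOKEN=<token-value>
```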

@Cris you don’t have to switch over now, but we have a new way for projects to deploy secrets that is documented in the cluster docs here: Sealed Secrets - Philly (sandbox) Civic Cloud

With sealed secrets, you can encrypt secrets for the project and commit them directly to the public cluster repository alongside your project’s configuration. This means they get deployed like any other configuration change to your deployment, with no manual side steps to coordinate.

Over on the cfp-live-cluster repository, you can see an example of this in the code-for-philly.secrets/ tree next to code-for-philly/. There’s nothing special about the name of this directory except that it makes it easy to know what it is; what matters is that it sits outside the code-for-philly/ directory that gets rendered through Helm, so it’s just more static manifests to get gobbled up in the cluster deployment. You would want to PR yours into the cfp-sandbox-cluster repository at paws-data-pipeline.secrets/
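As a sketch of the workflow: you build a plain Secret locally, encrypt it with kubeseal against the cluster’s public key, and commit only the encrypted result. The committed SealedSecret manifest looks roughly like this (names, namespace, and ciphertext here are placeholders, not from the repo):

```yaml
# Hypothetical SealedSecret — safe to commit, since only the
# sealed-secrets controller in the cluster can decrypt it.
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: paws-secrets
  namespace: paws-data-pipeline
spec:
  encryptedData:
    SL_TOKEN: AgB3...  # ciphertext produced by kubeseal
```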

Thanks for fixing the missing ‘kind:’ - looks like I failed to add it back in when I was trying to combine the two PVC files.

I see what’s going on in the crash. I’ll fix and update tonight.

Thank you!

@chris all the secrets were in the file but one of the developers didn’t have the code to pull it from the environment - fixed!

Sealed secrets looks slick - will work on it for the next deploy.

Thanks for all your help.