
How to Clean Up Pods and Save Logs with Argo Workflows

Argo Workflows is an open-source, container-native workflow engine hosted by the CNCF. It makes it easy to automate and manage complex workflows on Kubernetes, and it suits a wide range of use cases, including machine learning, ETL, data analytics, and streaming data pipelines. Workflows are implemented as a Kubernetes CRD (custom resource definition), which makes them easy to deploy and manage.

Argo Workflows defines multi-step workflows as a sequence of tasks where each step in the workflow runs as a container. This article explains how to clean up pods created by a workflow once it has run, and save any logs generated during workflow execution to an S3 bucket.

Pod Garbage Collection

Garbage collection is necessary to clean up Kubernetes cluster resources. When a workflow completes, the pods it created are not deleted by default. If you run many complex workflows on your cluster, these leftover pods accumulate, cluttering the cluster and creating unnecessary load on the API server. Leveraging a garbage collection strategy to delete pods helps you avoid this.

The following are the four garbage collection strategies used by Argo Workflows:

  • OnPodCompletion: Deletes the pod once its work is completed. This mode will delete the pod even if it encounters an error or failure.
  • OnPodSuccess: Deletes the pod once its workload is completed successfully.
  • OnWorkflowCompletion: Deletes all the pods in the workflow once the whole workflow execution is completed. Once again, this mode will delete the pods even if the workflow fails.
  • OnWorkflowSuccess: Deletes all the pods in the workflow once the whole workflow has run successfully.

The first two strategies work at the pod level and the last two at the workflow level. With a pod-level strategy, garbage collection terminates each pod as soon as that pod finishes (based on the applied policy and the pod’s execution status), without waiting for the execution results of subsequent pods in the workflow. With a workflow-level strategy, no pod is terminated until the entire workflow is done, again based on the applied policy and the workflow’s execution status.

The garbage collection strategy you should use will depend on the use case. For example, when creating a workflow, you may want to set the garbage collection strategy to OnPodSuccess so you can debug a step that may have failed. Once you’re confident about the workflow execution and want to deploy to production, you may change the strategy to OnPodCompletion to clear up resources faster. For more complex workflows, setting garbage collection at a workflow level enables you to verify the functionality of every step at a granular level.
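In the workflow manifest, the strategy is set under spec.podGC. For example, to clean up successful pods while keeping failed ones around for debugging:

  # Part of the Workflow spec; pick whichever of the four strategies fits your use case
  podGC:
    strategy: OnPodSuccess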

{% cta-1 %}

Benefits of Storing Logs in S3

For complex workflows, it’s not practical to debug issues during workflow execution by connecting to the cluster and checking logs from each pod. Some developers on the team may not even have direct access to a production Kubernetes cluster for security reasons. Storing logs from the different pods in a centralized location makes it simpler for them to debug any issues. And once the garbage collection strategy removes a pod, its logs are gone with it, so it makes sense to save these logs elsewhere, preferably in a cloud object store like AWS S3.

Retaining the logs for a certain period also helps you analyze your workflow’s execution behavior when you run into issues; the logs come in handy for figuring out what changed between workflow runs.

This centralized location can also store the output generated by each step and serve it as the input for the next step. These are called input/output artifacts, and they’re critical components of a workflow because its steps are interconnected and interdependent.

How to Clean Up Pods and Save Logs to an S3 Bucket

Let’s now see this in practice. For this exercise, you need access to an AWS account and a running EKS cluster, which you can easily create using eksctl.
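If you don’t have a cluster yet, a single eksctl command can create one. The cluster name, region, and node count below are placeholders:

  # Creates an EKS cluster with a small managed node group; adjust name, region, and size
  eksctl create cluster --name argo-demo --region us-east-1 --nodes 2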

Step 1

You’ll be executing all the commands in an AWS CloudShell terminal. Open CloudShell, install kubectl, connect it to your EKS cluster, and verify that the nodes are up and running.
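The commands look roughly like this; the cluster name and region are placeholders for your environment:

  # Install kubectl in CloudShell
  curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
  chmod +x kubectl
  sudo mv kubectl /usr/local/bin/

  # Point kubectl at the EKS cluster (placeholder name and region) and check the nodes
  aws eks update-kubeconfig --name argo-demo --region us-east-1
  kubectl get nodes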

Kubernetes Nodes

Step 2

If the cluster is ready (as in our case), run the following commands to install the Argo CLI:
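(This sketch downloads the CLI from the project’s GitHub releases; treat the version in the URL as an example and use the latest release.)

  # Download and install the Argo CLI
  curl -sLO https://github.com/argoproj/argo-workflows/releases/download/v3.4.4/argo-linux-amd64.gz
  gunzip argo-linux-amd64.gz
  chmod +x argo-linux-amd64
  sudo mv argo-linux-amd64 /usr/local/bin/argo

  # Verify the installation
  argo version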

Argo CLI Installation

Step 3

Now run the following commands to install Argo Workflows:
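(Again, the release version in the URL is a placeholder for the latest one.)

  # Create the namespace and install the workflow controller and Argo server
  kubectl create namespace argo
  kubectl apply -n argo -f https://github.com/argoproj/argo-workflows/releases/download/v3.4.4/install.yaml

  # Wait until the Argo pods are Running
  kubectl get pods -n argo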

Argo Workflows Installation

Step 4

Go to the AWS S3 console and create a bucket. This bucket will store the workflow logs and artifacts.
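You can also create the bucket from CloudShell instead of the console; S3 bucket names must be globally unique, so the name below is only a placeholder:

  # Create the bucket that will hold workflow logs and artifacts
  aws s3 mb s3://my-argo-artifacts-bucket --region us-east-1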

S3 bucket in AWS console

Step 5

You now have to attach the following inline policy to the IAM role associated with the cluster nodes. This is so that the nodes can access the S3 bucket (make sure to change the S3 bucket name accordingly):
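(A policy along these lines grants the required read/write access; my-argo-artifacts-bucket is a placeholder for your bucket name.)

  {
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Action": [
          "s3:GetObject",
          "s3:PutObject",
          "s3:DeleteObject",
          "s3:ListBucket"
        ],
        "Resource": [
          "arn:aws:s3:::my-argo-artifacts-bucket",
          "arn:aws:s3:::my-argo-artifacts-bucket/*"
        ]
      }
    ]
  }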

IAM policy for the role associated with the cluster nodes

Step 6

Edit the Argo Workflows ConfigMap with the kubectl edit cm -n argo workflow-controller-configmap command and add the following data so that it uses the S3 bucket as the artifact repository. Once again, change the bucket name accordingly:
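(The bucket name and region below are placeholders; useSDKCreds tells Argo to use the node’s IAM role instead of static access keys, and archiveLogs also saves each pod’s main container logs to the bucket.)

  data:
    artifactRepository: |
      # Save the main container logs of every pod to the bucket as well
      archiveLogs: true
      s3:
        bucket: my-argo-artifacts-bucket
        endpoint: s3.amazonaws.com
        region: us-east-1
        # Use the IAM role attached to the nodes instead of static credentials
        useSDKCreds: true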

Argo Workflow ConfigMap

Step 7

Create a workflow file named workflow.yaml with the following definition:
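(The definition below is a sketch consistent with the behavior described in the following steps; the template names and the alpine image are illustrative choices.)

  apiVersion: argoproj.io/v1alpha1
  kind: Workflow
  metadata:
    generateName: artifact-passing-
  spec:
    entrypoint: main
    templates:
      - name: main
        steps:
          # Step 1: write the message and expose it as an output artifact
          - - name: generate-artifact
              template: hello-argo-gen
          # Step 2: consume the artifact produced by step 1 and print it
          - - name: print-message
              template: print-message
              arguments:
                artifacts:
                  - name: message
                    from: "{{steps.generate-artifact.outputs.artifacts.hello-argo}}"

      - name: hello-argo-gen
        container:
          image: alpine:3.18
          command: [sh, -c]
          args: ["echo 'hello Argo' > /tmp/hello_argo.txt"]
        outputs:
          artifacts:
            # Saved to the S3 artifact repository as hello-argo.tgz
            - name: hello-argo
              path: /tmp/hello_argo.txt

      - name: print-message
        inputs:
          artifacts:
            # Downloaded from S3 and placed at /tmp/message
            - name: message
              path: /tmp/message
        container:
          image: alpine:3.18
          command: [sh, -c]
          args: ["cat /tmp/message"]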

Step 8

Run the workflow with the argo -n argo submit workflow.yaml --watch command.

Argo Workflow execution status

The workflow definition has two steps. In the first step, a message is stored in /tmp/hello_argo.txt as an artifact. The second step fetches it, stores it in /tmp/message, and prints the file content. This step doesn’t have an output artifact.

Step 9

If you now check the S3 bucket, you’ll see a folder with the same name as your workflow. Your artifact is stored there as a hello-argo.tgz file, which matches the name specified for the output artifact in the workflow definition.

Argo Workflow output artifact

If you download and extract it, you’ll get a file named hello_argo.txt containing the message “hello Argo.” Next, run the argo -n argo logs <workflow_name> command to see the workflow output. Make sure to change the “workflow_name” placeholder accordingly:

Argo Workflow output

{% related-articles %}

Step 10

Now run the kubectl get pods -n argo command. You’ll see that the pods created by the workflow are still there with a “Completed” status.

Pod status

Step 11

To clean up the pods, modify the workflow definition to add a pod garbage collection strategy with the podGC parameter:
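(Any of the four strategies will clean these up; the sketch below uses OnPodCompletion to delete each pod as soon as it finishes, whether it succeeds or fails. The rest of the workflow spec stays the same.)

  spec:
    entrypoint: main
    # Delete each pod as soon as it completes, whether it succeeded or failed
    podGC:
      strategy: OnPodCompletion
    templates:
      # ...unchanged from the definition in step 7...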

Re-run the workflow and check the pods. This time, there won’t be any new pods left in the “Completed” state, because the garbage collection policy deleted them as they finished.

Pod status with garbage collection policy

Conclusion

Although the example workflow in this tutorial was very simple, it shows that setting up S3 as an artifact repository for your Argo workflows is easy. An artifact repository not only stores logs for later analysis and debugging, it also lets you pass artifacts between the different steps of a workflow.

You should also now know why garbage collection strategies are important and how to set them up for Argo Workflows. Using a proper garbage collection strategy ensures that pods get deleted efficiently so that the cluster doesn’t get cluttered.

Pipekit is the control plane for Argo Workflows, seamlessly orchestrating even complex workflows. It can also set up your data pipelines quickly and scale them with ease. To discover how easy it is to set up a data pipeline with Pipekit, book your personalized demo.

