May 5, 2024

The Snowflake Cost Optimization Manual

A comprehensive guide covering everything you need to reduce Snowflake costs.

Sahil Singla

Co-founder of Baselit

This is a manual on Snowflake optimization. Everything you need to know about optimizing Snowflake is covered here. For further information on any specific tip, feel free to reach out to me at sahil@baselit.ai.

  1. Set warehouse auto-suspend to 60 seconds.

The auto-suspend value is the length of time a warehouse keeps running idle after the last query finishes. Since Snowflake charges for compute by the second, running a warehouse idle is just burning money. To prevent that, set your auto-suspend to 60 seconds.

Caveat: If your warehouse runs customer-facing workloads that are highly sensitive to latency and frequently utilize cache, the 60-second rule may not be a good fit.

You can set it to 30 seconds as well, but in some cases that can actually increase costs: you are billed for a minimum of 1 minute whenever a warehouse resumes, so with short queries the extra suspend/resume cycles can result in double billing.
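As a minimal sketch (the warehouse name is a placeholder), auto-suspend is a single ALTER statement:

```sql
-- Suspend the warehouse after 60 seconds of inactivity
ALTER WAREHOUSE analytics_wh SET AUTO_SUSPEND = 60;
```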

  2. Find queries that are not pruning effectively

The golden rule for reducing Snowflake cost is to process the least amount of data. Snowflake stores data in micro-partitions, and every micro-partition carries metadata such as the minimum and maximum values of each column.

For example, let's say you are querying event data. A filter like event_date = '2024-04-22' ensures that a significantly large amount of data is never processed, because that date will not fall within the minimum and maximum of almost 99% of the data you have. The filter not only lowers costs, it also returns results quicker.

In the Snowflake world, money = time. If you can find a way to reduce a query's runtime, you save money. Partition pruning is a win-win on both counts. Here's a detailed blog on identifying queries that are not pruning well.
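As a rough sketch, you can surface poorly pruning queries from the ACCOUNT_USAGE.QUERY_HISTORY view by comparing partitions scanned against partitions total (the time window and thresholds below are arbitrary):

```sql
-- Queries that scanned most of their micro-partitions in the last 7 days
SELECT query_id,
       query_text,
       partitions_scanned,
       partitions_total,
       partitions_scanned / NULLIF(partitions_total, 0) AS scan_ratio
FROM snowflake.account_usage.query_history
WHERE start_time >= DATEADD('day', -7, CURRENT_TIMESTAMP())
  AND partitions_total > 1000                            -- ignore small tables
  AND partitions_scanned / NULLIF(partitions_total, 0) > 0.8
ORDER BY partitions_scanned DESC
LIMIT 50;
```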

  3. Set up resource monitors

Resource monitors do not reduce cost by themselves, but they at least help you keep a pulse on it. In simple terms, they let you define a credit budget for a warehouse and notify you via email if that warehouse goes over budget. Here's a detailed blog on the setup.
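A sketch of the setup (the names and quota are illustrative, and creating monitors requires the ACCOUNTADMIN role):

```sql
-- 100 credits per month, with notifications at 80% and 100%
CREATE RESOURCE MONITOR analytics_monitor
  WITH CREDIT_QUOTA = 100
       FREQUENCY = MONTHLY
       START_TIMESTAMP = IMMEDIATELY
       TRIGGERS ON 80 PERCENT DO NOTIFY
                ON 100 PERCENT DO NOTIFY;

-- Attach the monitor to a warehouse
ALTER WAREHOUSE analytics_wh SET RESOURCE_MONITOR = analytics_monitor;
```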

  4. 100-250 MB file size for ingestion

File size has a very significant impact on the cost and performance of ingestion in Snowflake. The ideal file size for optimal ingestion performance is between 100 MB and 250 MB per file. Each node in a cluster has 8 cores, and each core can ingest one file at a time, so an X-Small warehouse with one node can ingest 8 files in parallel.

Moreover, since a micro-partition is approximately 16 MB, very small files mean micro-partitions get rewritten multiple times. Say you need to ingest 500 GB of data and have two options: 1 million files of 500 KB each, or 5,000 files of 100 MB each. Choosing 5,000 files of 100 MB is smarter both in terms of cost and ingestion time.
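If your files are already staged, a quick way to check whether they fall in that range is to LIST the stage and summarize the sizes (the stage name is a placeholder; LIST reports sizes in bytes):

```sql
LIST @raw_events_stage;

-- Summarize the output of the LIST command above
SELECT COUNT(*)          AS file_count,
       AVG("size") / 1e6 AS avg_size_mb,
       MIN("size") / 1e6 AS min_size_mb,
       MAX("size") / 1e6 AS max_size_mb
FROM TABLE(RESULT_SCAN(LAST_QUERY_ID()));
```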

  5. Set appropriate query timeouts

Appropriate query timeouts, based on warehouse size and the business use case, can prevent expensive mistakes. The default is 2 days, which is far too generous for most real-world scenarios. Setting timeouts at the warehouse level makes the most sense. For example, any warehouse serving queries that a user is actively waiting on, such as your BI tool, could have a timeout of under 5 minutes.

For transformations, the query timeout can be very different. Be careful when setting these timeouts, though: parallel jobs in Snowflake can add significantly to the runtime of an already scheduled query. To stay safe, every query should ideally finish in less than half of its permitted timeout. Here's a detailed blog on setting up timeouts and alerts for failed queries.
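A minimal sketch, with placeholder warehouse names and illustrative limits:

```sql
-- BI warehouse: cancel anything running longer than 5 minutes
ALTER WAREHOUSE bi_wh SET STATEMENT_TIMEOUT_IN_SECONDS = 300;

-- Transformation warehouse: allow up to 1 hour
ALTER WAREHOUSE transform_wh SET STATEMENT_TIMEOUT_IN_SECONDS = 3600;
```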

  6. Identify and remove unused storage using ACCESS_HISTORY

Storage is generally very cheap in Snowflake, but unused tables can still add up to a significant redundant cost. Snowflake provides the ACCESS_HISTORY view, which parses each of your SQL queries and records every table they access. It can be used to identify unused storage efficiently.

If you have a view on top of a table and you only ever query the view, the table is accessed indirectly. ACCESS_HISTORY captures those indirect accesses as well, so the information it provides is actually relevant. For a more detailed explanation, you can read this blog.
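A sketch of the idea (ACCESS_HISTORY requires Enterprise Edition; the 90-day window is arbitrary, and you may want to exclude system or temporary schemas):

```sql
-- Tables with no recorded access in the last 90 days
WITH accessed AS (
    SELECT DISTINCT f.value:"objectName"::string AS table_name
    FROM snowflake.account_usage.access_history ah,
         LATERAL FLATTEN(input => ah.base_objects_accessed) f
    WHERE ah.query_start_time >= DATEADD('day', -90, CURRENT_TIMESTAMP())
      AND f.value:"objectDomain"::string = 'Table'
)
SELECT t.table_catalog || '.' || t.table_schema || '.' || t.table_name AS full_name,
       t.active_bytes / POWER(1024, 3) AS active_gb
FROM snowflake.account_usage.table_storage_metrics t
LEFT JOIN accessed a
  ON a.table_name = t.table_catalog || '.' || t.table_schema || '.' || t.table_name
WHERE a.table_name IS NULL
  AND t.deleted = FALSE
ORDER BY active_gb DESC;
```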

  7. Identify and remove repeatedly failing workloads

The tricky part is defining what exactly counts as a failing workload. In general, it is anything that fails repeatedly without being reported. We've tried to templatize failing workloads, but the definition can be very specific to your company. Take a look at this blog if you haven't yet removed failing workloads on your Snowflake.

Apart from saving money, a secondary advantage of this exercise is that it keeps track of which data is relevant for business users and which isn't.
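As a starting point, repeated failures can be grouped from QUERY_HISTORY (a sketch; QUERY_PARAMETERIZED_HASH may not be populated for older queries, in which case group by user and warehouse instead):

```sql
-- Queries that failed 5 or more times in the last 30 days
SELECT query_parameterized_hash,
       ANY_VALUE(query_text) AS sample_query,
       user_name,
       warehouse_name,
       COUNT(*) AS failure_count,
       SUM(total_elapsed_time) / 1000 / 3600 AS wasted_compute_hours
FROM snowflake.account_usage.query_history
WHERE start_time >= DATEADD('day', -30, CURRENT_TIMESTAMP())
  AND execution_status = 'FAIL'
GROUP BY query_parameterized_hash, user_name, warehouse_name
HAVING COUNT(*) >= 5
ORDER BY failure_count DESC;
```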

  8. Use incremental models in dbt

There is no simpler way to reduce Snowflake cost than to process less data. If you are using dbt, try to write every model incrementally. Instead of reprocessing all data every time, incremental models only process new or changed data since the last run.

This is achieved primarily by using date stamps or other markers to identify what data has been added or modified since the last run. Full refreshes are very expensive and should be avoided where possible.
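A minimal incremental model sketch (table and column names are placeholders):

```sql
-- models/fct_events.sql
{{ config(materialized='incremental', unique_key='event_id') }}

SELECT event_id,
       event_type,
       event_timestamp
FROM {{ source('raw', 'events') }}

{% if is_incremental() %}
  -- only process rows newer than what is already in the target table
  WHERE event_timestamp > (SELECT MAX(event_timestamp) FROM {{ this }})
{% endif %}
```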

  9. Merge similar warehouses into one multi-cluster warehouse

A workload is generally defined by its performance requirements. Snowflake doesn't guarantee any performance level, but users broadly choose their warehouse size based on the performance they need.

A multi-cluster warehouse is effectively a group of warehouses of the same size. Instead of creating dozens of different warehouses, creating multi-cluster warehouses is a better option because you defer the task of deciding how much compute to run to Snowflake.

Start by merging a couple, and compare the cost of the merged warehouse with the sum of the costs of the individual warehouses. In most cases, the merged warehouse will cost less.

Caveat: The maximum number of clusters Snowflake allows is 10, so you need to create a new warehouse when you get close to that limit. You can ask your Snowflake AE to increase it, but it's a very custom request.
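A sketch of a merged multi-cluster warehouse (names and limits are illustrative; multi-cluster warehouses require Enterprise Edition):

```sql
CREATE WAREHOUSE IF NOT EXISTS shared_bi_wh
  WAREHOUSE_SIZE = 'SMALL'
  MIN_CLUSTER_COUNT = 1
  MAX_CLUSTER_COUNT = 4
  SCALING_POLICY = 'STANDARD'   -- 'ECONOMY' favors cost over latency
  AUTO_SUSPEND = 60
  AUTO_RESUME = TRUE;
```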

  10. Use deferred merge technique

Deferred merge is a technique for reducing ingestion costs. Instead of merging incoming data into the target table at a high frequency, you keep a base table and a delta table with the same schema, and the delta table receives the incoming rows.

For example, let's say you have an orders table with a delivery status column. Delivery status can keep changing quickly. Instead of merging rows with updated delivery status every time, you can keep a base table and a delta table for the incoming rows.

Now, if you have to answer the question 'Which orders are pending but need to be delivered today?', you can take a UNION of the two tables and answer it. Then, MERGE the delta table into the base table later, at a lower frequency.

Generally, you'll create a view that is a UNION of these tables to serve the downstream queries. This way you are not constantly rewriting data, which reduces the cost of ingestion. Here's a blog where RevenueCat implemented this in their company.
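A minimal sketch with hypothetical orders_base and orders_delta tables (a real implementation also needs to handle rows that arrive between the merge and the truncate):

```sql
-- Serving view: delta rows win over base rows with the same key
CREATE OR REPLACE VIEW orders AS
SELECT * FROM orders_delta
UNION ALL
SELECT b.*
FROM orders_base b
WHERE NOT EXISTS (SELECT 1 FROM orders_delta d WHERE d.order_id = b.order_id);

-- Low-frequency (e.g. hourly or daily) merge of the delta into the base
MERGE INTO orders_base b
USING orders_delta d
  ON b.order_id = d.order_id
WHEN MATCHED THEN UPDATE SET delivery_status = d.delivery_status
WHEN NOT MATCHED THEN INSERT (order_id, delivery_status)
                      VALUES (d.order_id, d.delivery_status);

-- Clear the delta once it has been merged
TRUNCATE TABLE orders_delta;
```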

  11. Finding the right warehouse size - an easy trick

Ideally, every query runs on the right warehouse size. Remote spillage is the biggest factor in determining warehouse size, but it's difficult to find the best size for a query without manual experiments. Here's a trick if you want to automate this.

Schedule the query as a serverless task and then check QUERY_HISTORY for its executions. Snowflake adjusts the size of serverless compute iteratively, so let it figure out the right size. Then, to avoid paying the 1.5x premium for serverless tasks indefinitely, move the workload to a warehouse of that size once the size is stable.
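A sketch of the trick (the task, table, and schedule are placeholders, and it assumes WAREHOUSE_SIZE is populated in QUERY_HISTORY for serverless runs on your account):

```sql
-- Serverless task: no WAREHOUSE clause, Snowflake manages the compute size
CREATE OR REPLACE TASK size_probe_task
  SCHEDULE = '60 MINUTE'
  USER_TASK_MANAGED_INITIAL_WAREHOUSE_SIZE = 'XSMALL'
AS
  INSERT INTO daily_rollup
  SELECT event_date, COUNT(*) FROM events GROUP BY event_date;

ALTER TASK size_probe_task RESUME;

-- Later: check which size Snowflake's managed compute settled on
SELECT th.query_id,
       qh.warehouse_size,
       qh.total_elapsed_time / 1000 AS seconds
FROM snowflake.account_usage.task_history th
JOIN snowflake.account_usage.query_history qh
  ON qh.query_id = th.query_id
WHERE th.name = 'SIZE_PROBE_TASK'
ORDER BY th.completed_time DESC;
```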

Want to hear the latest on Snowflake optimization?

Sign up for our newsletter that we send out (roughly) every two weeks.

© Merse, Inc. dba Baselit. All rights reserved.
