# ❄️ Snowflake Utilities
Submitted by Zachary Blackwood
## Summary

Utilities for Streamlit-in-Snowflake.
## Functions

### get_table

Get a Snowpark table for use in building a query.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`table_name` | `str` | Name of the table to retrieve | *required* |

Returns:

Type | Description |
---|---|
`Table` | `sp.Table`: A cached Snowpark Table object that can be used for querying. The result is cached so that metadata is not re-fetched from the database. |
Source code in src/streamlit_extras/snowflake/connection.py
Import:
- You should add this to the top of your .py file
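The docstring notes that the returned Table is cached so that metadata is not re-fetched from the database. A minimal pure-Python sketch of that pattern, using `functools.lru_cache` as a stand-in for Streamlit's caching and a purely illustrative `FakeSession` in place of a real Snowpark session:

```python
from functools import lru_cache


class FakeSession:
    """Illustrative stand-in for a Snowpark session."""

    def __init__(self):
        self.metadata_fetches = 0

    def table(self, name: str) -> str:
        # A real session would fetch table metadata and return an sp.Table;
        # here we just count how many times the "database" is hit.
        self.metadata_fetches += 1
        return f"<table {name}>"


session = FakeSession()


@lru_cache(maxsize=None)
def get_table(table_name: str) -> str:
    # Repeated calls with the same name reuse the first result, so
    # metadata is fetched from the database only once per table name.
    return session.table(table_name)


get_table("db.schema.t1")
get_table("db.schema.t1")  # served from cache
print(session.metadata_fetches)  # → 1
```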
### run_snowpark

Convert a Snowpark DataFrame to a pandas DataFrame and cache the result.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`df` | `DataFrame` | The Snowpark DataFrame to convert | *required* |
`ttl` | `timedelta \| int \| None` | Time-to-live for the cache. Defaults to 2 hours. Set to `None` to use the default cache invalidation. | `timedelta(hours=2)` |
`lowercase_columns` | `bool` | Whether to convert column names to lowercase. Defaults to `True`. | `True` |

Returns:

Type | Description |
---|---|
`DataFrame` | `pd.DataFrame`: The converted pandas DataFrame with cached results |
Source code in src/streamlit_extras/snowflake/connection.py
Import:
- You should add this to the top of your .py file
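Snowflake returns unquoted identifiers in uppercase (e.g. `TABLE_NAME`), which is why `lowercase_columns` defaults to `True`. A hypothetical sketch of that step (the `lowercase` helper below is illustrative, not the library's actual internals):

```python
import pandas as pd


def lowercase(df: pd.DataFrame) -> pd.DataFrame:
    # Rename every column with str.lower, so SNOWFLAKE-STYLE names
    # become the snake_case names pandas users usually expect.
    return df.rename(columns=str.lower)


df = pd.DataFrame({"TABLE_NAME": ["T1"], "CREATED": ["2024-01-01"]})
print(list(lowercase(df).columns))  # → ['table_name', 'created']
```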
### run_sql

Execute a SQL query and cache the results.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`query` | `str` | The SQL query to execute | *required* |
`ttl` | `timedelta \| int \| None` | Time-to-live for the cache. Defaults to 2 hours. Set to `None` to use the default cache invalidation. | `timedelta(hours=2)` |
`lowercase_columns` | `bool` | Whether to convert column names to lowercase. Defaults to `True`. | `True` |
Returns:

Type | Description |
---|---|
`DataFrame` | `pd.DataFrame`: The query results as a pandas DataFrame |
Source code in src/streamlit_extras/snowflake/connection.py
Import:
- You should add this to the top of your .py file
## Examples

### snowpark_example
```python
def snowpark_example():
    from snowflake.snowpark.functions import col

    df = (
        get_table("snowflake.information_schema.tables")
        .select("table_name", "table_schema", "created")
        .where(col("table_type") == "VIEW")
        .limit(10)
    )

    st.dataframe(run_snowpark(df))
```