Block: Workspace
Field | Type | Description |
---|---|---|
edition: | string | The SDF edition; should always be 1.3 (1.2 is deprecated) |
name: | string | The name of this workspace (defaults to the workspace directory name if not given); must be set for deployment |
is-tablename-case-sensitive: | boolean | Whether table names are treated as case-sensitive in this workspace. Applies globally to the entire project and is non-overridable; defaults to false. Note: this setting is only effective when set on the root workspace file; if set on included workspaces it is ignored |
description: | string | A description of this workspace |
repository: | string | The URL of the workspace source repository (defaults to ‘none’ if no repository is given) |
includes: | array | An array of directories and filenames containing .sql and .sdf.yml files |
excludes: | array | An array of directories and filenames to be skipped when resolving includes |
dependencies: | array | Dependencies of the workspace to other workspaces or to cloud database providers |
integrations: | array | The integrations for this environment |
defaults: | Defaults? | Defaults for this workspace |
source-locations: | array | Workspace defined by this set of files |
vars: | object | A map of named values for setting SQL variables from your environment, e.g. dt: dt, used in SQL as @dt and in Jinja as {{ dt }} |
dbt: | DbtConfig? | Configuration for dbt integration |
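For orientation, here is a minimal sketch of a root workspace file using a few of the fields above (assuming the block sits under a top-level workspace: key in workspace.sdf.yml; all names, paths, and the repository URL are placeholders):

```yaml
workspace:
  edition: "1.3"
  name: my_workspace                 # placeholder; must be set for deployment
  description: Example analytics workspace
  repository: https://github.com/example-org/my_workspace   # hypothetical URL
  includes:
    - path: models                   # directory of .sql model files
      type: model
  defaults:
    dialect: snowflake
  vars:
    dt: "2024-01-01"                 # referenced as @dt in SQL
```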
Block: Table
Field | Type | Description |
---|---|---|
name: | String | The name of the table (syntax: [[catalog.]schema].table). Note: this field is typed as a QualifiedName for serialization; in almost all cases the fully qualified TableName should be obtained via fqn() rather than by reading this field directly |
description: | string? | A description of this table |
dialect: | Dialect? | The dialect of this table, defaults to trino |
materialization: | Materialization? | The table-type of this table (new version) |
warehouse: | string? | The warehouse where this table is computed |
purpose: | TablePurpose? | Specify what kind of table or view this is |
origin: | TableOrigin | The origin of this table: remote or local |
exists-remotely: | boolean? | Whether the table exists in the remote DB (used for is_incremental macro) |
table-location: | TableLocation? | Specifies the table location; defaults to none if not set |
creation-flags: | TableCreationFlags? | Defines the table creation options, defaults to none if not set |
incremental-options: | IncrementalOptions? | Options governing incremental table evaluation (only for incremental tables) |
snapshot-options: | SnapshotOptions? | Options governing snapshot table evaluation (only for snapshot tables) |
dependencies: | array | All tables that this table depends on (syntax: catalog.schema.table) |
depended-on-by: | array | All tables that depend on this table (syntax: catalog.schema.table) |
columns: | array | The columns of the schema: name, type, metadata |
partitioned-by: | array | The partitioning format of the table |
severity: | Severity? | The default severity for this table's tests and checks |
tests: | array | |
schedule: | string | The schedule of the table [expressed as cron] |
starting: | string | The first date of the table [expressed as a prefix of RFC 3339] |
classifiers: | array | An array of classifier references |
reclassify: | array | Array of reclassify instructions for changing the attached classifier labels |
lineage: | Lineage? | Lineage, a tagged array of column references |
location: | string? | Data is at this location |
file-format: | FileFormat? | Store table in this format [only for external tables] |
with-header: | boolean? | CSV data has a header [only for external tables] |
delimiter: | string? | CSV data is separated by this delimiter [only for external tables] |
compression: | CompressionType? | Json or CSV data is compressed with this method [only for external tables] |
cycle-cut-point: | boolean? | If this table is part of a cyclic dependency then cut the cycle here |
source-locations: | array | Table is defined by these .sql and/or .sdf files |
sealed: | boolean? | This table is either backed by a CREATE TABLE DDL or by a YML table definition that is the table's complete schema |
meta: | object | Metadata for this table |
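A hedged sketch of a table block using a subset of the fields above (assuming it lives under a top-level table: key in an .sdf.yml file; catalog, schema, column names, and the classifier reference are placeholders):

```yaml
table:
  name: analytics.pub.orders
  description: One row per customer order
  dialect: snowflake
  materialization: table
  columns:
    - name: order_id
      datatype: bigint
    - name: ordered_at
      datatype: timestamp
  classifiers:
    - RETENTION.long_term            # hypothetical classifier.label reference
```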
Block: Classifier
Field | Type | Description |
---|---|---|
name: | string | The name of the classifier type |
description: | string | A description of this classifier type |
labels: | array | Named classifier labels |
scope: | Scope | Scope of the classifier: table or column |
cardinality: | Cardinality | Cardinality of the classifier: zero-or-one, one, or zero-or-more |
propagate: | boolean | Whether the classifier propagates from scope to scope or is a single-scope marker |
source-locations: | array | Classifier defined by this set of .sdf files |
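As a sketch, a column-scoped classifier with two labels might look like this (assuming a top-level classifier: key; the PII name and labels are illustrative):

```yaml
classifier:
  name: PII
  description: Personally identifiable information
  scope: column
  cardinality: zero-or-more
  propagate: true
  labels:
    - name: email
      description: Email addresses
    - name: phone
      description: Phone numbers
```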
Block: Function
Field | Type | Description |
---|---|---|
name: | String | The name of the function [syntax: [[catalog.]schema].function] |
section: | string | The function category |
dialect: | Dialect? | The dialect that provides this function |
description: | string | A description of this function |
variadic: | Variadic | Arbitrary number of arguments of a common type out of a list of valid types |
kind: | FunctionKind | The function kind |
parameters: | [Parameter]? | The arguments of this function |
optional-parameters: | [OptionalParameter]? | The optional arguments of this function |
returns: | Parameter? | The results of this function (can be a tuple) |
constraints: | object | The constraints on generic type bounds |
binds: | array | The generic type bounds |
volatility: | Volatility | The volatility of the function |
examples: | array | Example uses of the function (tuples with input/output) |
cross-link: | string | Link to existing documentation, for example: https://trino.io/docs/current/functions/datetime.html#date_trunc |
reclassify: | array | Array of reclassify instructions for changing the attached classifier labels |
source-locations: | array | Function defined by this set of .sdf files |
implemented-by: | FunctionImplSpec? | |
special: | boolean | Function can be called without parentheses, as if it were a constant (e.g. current_date) |
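A sketch of a scalar function declaration built from the fields above (assuming a top-level function: key; the name, section, and parameters are placeholders):

```yaml
function:
  name: analytics.udfs.mask_email
  section: custom
  dialect: snowflake
  kind: scalar
  volatility: pure
  description: Masks the local part of an email address
  parameters:
    - name: email
      datatype: varchar
  returns:
    datatype: varchar
```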
Block: Credential
Field | Type | Description |
---|---|---|
name: | string | The name of the credential (default = ‘default’) |
description: | string? | A description of this credential |
source-locations: | [FilePath]? | Credential defined by this set of .sdf files |
Options
sdf
snowflake
Field | Type | Description |
---|---|---|
type: | string | |
account-id: | string | The account id of your snowflake account (e.g. <orgname>.<accountname> or <locator>) |
username: | string | The user name to connect to snowflake |
password: | string? | The password to connect to snowflake (mutually exclusive with private_key_path) |
private-key-path: | string? | The path to the private key file for key pair authentication (mutually exclusive with password) |
private-key-pem: | string? | The private key in PEM format (mutually exclusive with private_key_path) |
private-key-passphrase: | string? | The passphrase for the private key, if it’s encrypted |
role: | string? | The role to use for the connection |
warehouse: | string? | The warehouse to use for the connection |
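A sketch of a Snowflake credential built from the fields above (assuming a top-level credential: key; account, user, role, and warehouse values are placeholders, and secrets should not be committed in plain text):

```yaml
credential:
  name: default
  type: snowflake
  account-id: myorg.myaccount
  username: sdf_user
  private-key-path: ~/.ssh/snowflake_key.p8   # mutually exclusive with password
  role: transformer
  warehouse: compute_wh
```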
bigquery
Field | Type | Description |
---|---|---|
type: | string | |
client-email: | string | The client email of your bigquery service account |
private-key: | string | The base64-encoded private key of your bigquery service account |
project-id: | string | Project Id |
aws
Field | Type | Description |
---|---|---|
type: | string | |
profile: | string? | The name of the profile to use in the AWS credentials file |
default-region: | string? | The default region to use for this profile |
role-arn: | string? | The arn of the role to assume |
external-id: | string? | The external id to use for the role |
use-web-identity: | boolean? | Whether to use a web identity for authentication |
access-key-id: | string? | The access key id to use for the connection |
secret-access-key: | string? | The secret access key to use for the connection |
session-token: | string? | The session token to use for the connection |
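Similarly, an AWS credential might be sketched as follows (profile, region, and role ARN are placeholders; only a subset of the optional fields is shown):

```yaml
credential:
  name: default
  type: aws
  profile: my-profile                # from the AWS credentials file
  default-region: us-east-1
  role-arn: arn:aws:iam::123456789012:role/sdf-read   # hypothetical role
```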
openai
Field | Type | Description |
---|---|---|
type: | string | |
api-key: | string | The api key to use for the connection |
empty
Field | Type | Description |
---|---|---|
type: | string | |
Block: Environment
Field | Type | Description |
---|---|---|
name: | String | The name of this workspace (defaults to the workspace directory name if not given); must be set for deployment |
description: | string | A description of this workspace |
repository: | string | The URL of the workspace source repository (defaults to ‘none’ if no repository is given) |
includes: | array | An array of directories and filenames containing .sql and .sdf.yml files |
excludes: | array | An array of directories and filenames to be skipped when resolving includes |
defaults: | Defaults? | Defaults for this workspace |
dependencies: | array | Dependencies of the workspace to other workspaces or to cloud database providers |
integrations: | array | The integrations for this environment |
vars: | object | A map of named values for setting SQL variables from your environment, e.g. dt: dt, used in SQL as @dt and in Jinja as {{ dt }} |
source-locations: | array | Workspace defined by this set of files |
preprocessor: | PreprocessorType? | Experimental: this project uses Jinja |
Enum: IncludeType
Value | Description |
---|---|
model | Models are .sql files (ddls or queries) used for data transformation. |
test | Tests are expectations against the data of models. Uses the sdf-test library by default. |
check | Checks are logical expectations expressed as queries against the information schema of the SDF workspace. |
report | Reports are informational queries against the information schema of the SDF workspace. |
stat | Statistics are informational queries against the data of the models. |
resource | Resources are local data files (like parquet, csv, or json) used by models in the workspace. |
metadata | [Deprecated] Metadata are .sdf.yml files to add additional metadata. |
spec | Specifications are .sdf.yml files to configure your workspace and add additional metadata. |
macro | Macros are .jinja files that define reusable SQL code. |
seed | Seeds are models created from resources. The sdf local engine enables type checking of seeds independently of providers. Seeds can optionally be uploaded to the remote warehouse. |
slt | Slt files are tests that follow the sqllogictest format |
Enum: IndexMethod
Value | Description |
---|---|
scan-dbt | |
none | File path is not used in figuring out the full names of the tables defined in the file. Catalog and schema are inferred from the SQL statement if possible, or from the defaults section in the workspace configuration. Table name is inferred from the SQL statement if possible or from the name of the file without the extension. To figure out dependencies between tables and files, SDF will parse every file in the corresponding include path. This is the default option |
table-name | Table name is inferred from the file name without the extension. Catalog and schema are inferred from the defaults configuration. SDF assumes that a referenced table will reside in the file with the corresponding name. As a result, only the required subset of files is parsed. This option is fast compared to the default, but it requires catalogs and schemas to use the default values |
schema-table-name | Table name is inferred from the file name without the extension. Schema is inferred from the directory name in which the file resides. Catalog is inferred from the defaults configuration. SDF assumes that a referenced table will reside in the directory/file with the corresponding schema/table name. As a result, only the required subset of files is parsed. This option is also fast, and it allows the schema to vary based on the directory name, but it still requires catalog to use the default values |
catalog-schema-table-name | Table name is inferred from the file name without the extension. Schema is inferred from the directory name in which the file resides. Catalog is inferred from the directory name in which the schema directory resides. Defaults are not used. SDF assumes that a referenced table will reside in the catalog/schema/table file with the corresponding schema/catalog/table names. As a result, only the required subset of files is parsed. This option is also fast, and it provides the most flexibility as catalog, schema and table names can all vary |
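To illustrate the difference, consider a hypothetical model file under an include path, with defaults.catalog: main and defaults.schema: pub (all names assumed):

```yaml
# Hypothetical model file: models/analytics/staging/orders.sql
#
# Full table name inferred per index method:
#   none                      -> from the SQL statement where possible, otherwise from defaults
#   table-name                -> main.pub.orders           (catalog and schema from defaults)
#   schema-table-name         -> main.staging.orders       (schema from the parent directory)
#   catalog-schema-table-name -> analytics.staging.orders  (catalog and schema from the enclosing directories)
```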
Enum: Dialect
Value | Description |
---|---|
trino | |
snowflake | Snowflake dialect with unnormalized column names. When this dialect is specified, column names will be automatically converted to uppercase. Use this dialect if all column names in your database are case-normalized to uppercase and you want the column names in your YAML files to be case-insensitive. If your table has lower-case or mixed-case column names, this dialect will not work; you must use snowflake_normalized instead. Note that the difference between this dialect and snowflake_normalized is limited to non-SQL sources only (e.g. .sdf.yml files, parquet/CSV files); for SQL files, the two dialects are identical. NOTE: the serialized form of this dialect is snowflake, for backward compatibility's sake |
snowflake_normalized | Snowflake dialect with semantically accurate column names. When this dialect is specified, column names will be left as-is. Users are encouraged to use this dialect, as it is more accurate, with less potential for surprises. Note that the difference between this dialect and snowflake is limited to non-SQL sources only (e.g. .sdf.yml files, parquet/CSV files); for SQL files, the two dialects are identical. NOTE: the serialized form of this dialect is snowflake_normalized, for backward compatibility's sake |
Enum: PreprocessorType
Values |
---|
none |
jinja |
sql-vars |
all |
Enum: Materialization
Value | Description |
---|---|
table | Permanent table; e.g. for Snowflake, it will generate CREATE OR REPLACE TABLE AS ... |
transient-table | Transient tables are cost-optimized as the 90 day data retention period is not enforced. They also limit Snowflake time travel support. e.g. for Snowflake, it will generate CREATE OR REPLACE TRANSIENT TABLE AS ... |
temporary-table | Temporary table; e.g. for Snowflake, it will generate CREATE OR REPLACE TEMPORARY TABLE AS ... |
external-table | Table with pre-existing data; it is not populated by SDF e.g. for Snowflake, it will generate CREATE OR REPLACE EXTERNAL TABLE ... |
view | Permanent view (default option); e.g. for Snowflake, it will generate CREATE OR REPLACE VIEW AS ... |
materialized-view | Materialized view; e.g. for Snowflake, it will generate CREATE OR REPLACE MATERIALIZED VIEW AS ... |
incremental-table | Incremental tables are permanent tables that are gradually populated and not overwritten from scratch during every SDF run. Different SQL statements will be generated depending on whether this table already exists or not. In the latter case, an equivalent of CREATE OR REPLACE TABLE will be generated; in the former case, one of INSERT INTO ... , MERGE ... , or DELETE FROM ... , followed by INSERT INTO ... will be generated depending on the parameters of incremental-options |
snapshot-table | Snapshot tables are permanent tables that capture a snapshot of some underlying data at the time of running the query. Like incremental tables, snapshot tables are not recomputed from scratch during every SDF run |
recursive-table | Recursive tables are permanent tables whose queries can have self-references. SDF will generate an equivalent of CREATE RECURSIVE TABLE AS ... Note: in Snowflake, recursive tables are not treated specially; a normal CREATE OR REPLACE TABLE AS ... is generated for them |
other | User-defined other materializations; it is up to users to define their own materialization logic |
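As a sketch (placeholder names), a table block that opts into a transient table instead of the default view materialization:

```yaml
table:
  name: analytics.pub.sessions
  materialization: transient-table   # e.g. on Snowflake: CREATE OR REPLACE TRANSIENT TABLE AS ...
```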
Enum: TableCreationFlags
Value | Description |
---|---|
create-new | Attempts to create a new table; fails if the table already exists. |
drop-if-exists | Drops the existing table and creates a new one. |
skip-if-exists | Skips table creation if the table already exists. |
create-or-replace | Replaces the existing table with a new one. Implementation may vary by DBMS. |
create-if-not-exists | Creates the table only if it doesn’t already exist. |
Enum: SyncType
Value | Description |
---|---|
always | Synchronizes directory on pull and push |
on-pull | Synchronizes directory on every pull |
on-push | Synchronizes directory on every push |
never | Never synchronizes directory |
Enum: Severity
Enum: CompressionType
Value | Description |
---|---|
tar | |
bzip2 | BZIP2 Compression (.bz2) |
gzip | GZIP Compression (.gzip) |
none | None (default) |
Enum: ExcludeType
Value | Description |
---|---|
content | |
path | Excludes this path, can be a glob expression |
Enum: IntegrationType
Values |
---|
data |
metadata |
database |
Enum: ProviderType
Values |
---|
glue |
redshift |
snowflake |
bigquery |
s3 |
sdf |
Enum: TablePurpose
Value | Description |
---|---|
report | |
model | A regular table |
check | A code contract |
test | A data contract |
stat | A data report |
slt | An slt table |
system | An SDF system table/view, maintained by the SDF CLI |
external-system | An External System table/view, maintained by the database system (e.g. Snowflake, Redshift, etc.) |
Enum: TableOrigin
Enum: TableLocation
Values |
---|
mirror |
local |
remote |
intrinsic |
Enum: IncrementalStrategy
Values |
---|
append |
merge |
delete+insert |
Enum: OnSchemaChange
Value | Description |
---|---|
fail | Fail when the new schema of an incremental table does not match the old one |
append | Keep all the columns of the old incremental table schema while appending the new columns from the new schema. Initialize all the columns that were deleted in the new schema with NULL |
sync | Delete all the deleted columns from the incremental table; Append all the new columns |
Enum: SnapshotStrategy
Enum: CheckColsSpec
Value | Description |
---|---|
cols | A list of column names to be used for checking if a snapshot’s source data set was updated |
all | Use all columns to check whether a snapshot’s source data set was updated |
Enum: FileFormat
Store table data in these formats.
Enum: Scope
Enum: Cardinality
Values |
---|
zero-or-one |
one |
zero-or-more |
Enum: Variadic
Value | Description |
---|---|
non-uniform | |
uniform | All arguments have the same types |
even-odd | All even arguments have one type, odd arguments have another type |
any | Any length of arguments, arguments can be different types |
Enum: FunctionKind
Values |
---|
scalar |
aggregate |
window |
table |
Enum: Volatility
Value | Description |
---|---|
pure | A pure function will always return the same output when given the same input. |
stable | A stable function may return different values given the same input across different queries, but must return the same value for a given input within a query. |
volatile | A volatile function may change the return value from evaluation to evaluation. Multiple invocations of a volatile function may return different results when used in the same query. |
Enum: FunctionImplSpec
Value | Description |
---|---|
builtin | By a built-in primitive in Datafusion. (Being phased out in favor of UDFs.) |
rust | By a UDF in the sdf-functions crate. |
datafusion | By a UDF in the datafusion crate. |
sql | By a CREATE FUNCTION. (Not yet supported for evaluation.) |
Enum: SdfAuthVariant
Values |
---|
interactive |
headless |
id_token_from_file |
Nested Elements
Nested element: IncludePath
Field | Type | Description |
---|---|---|
path: | string | A filepath |
type: | IncludeType | Type of included artifacts, e.g. model, test, stat, resource (see IncludeType) |
index: | IndexMethod | Index method for this include path (see IndexMethod) |
defaults: | Defaults? | Defaults for files on this path |
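A sketch of include paths combining these fields inside a workspace block (paths and the chosen index method are illustrative):

```yaml
workspace:
  includes:
    - path: models
      type: model
      index: schema-table-name       # only the referenced subset of files is parsed
    - path: seeds
      type: seed
    - path: checks
      type: check
      defaults:
        schema: quality              # placeholder default schema for files on this path
```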
Nested element: Defaults
Field | Type | Description |
---|---|---|
environment: | String? | The default environment (can only be set on the level of the workspace) |
dialect: | Dialect? | The dialect of this environment. If not set, defaults to trino |
preprocessor: | PreprocessorType? | The preprocessor for this environment. If not set, defaults to local |
catalog: | String? | Defines a default catalog. If not set, defaults to the (catalog/workspace) name in an outer scope |
schema: | String? | Defines a default schema. If not set, defaults to the schema name in an outer scope; if that is also not set, defaults to 'pub' |
materialization: | Materialization? | Defines the default materialization. If not set, defaults to the materialization in an outer scope; if that is also not set, defaults to base-table |
creation-flag: | TableCreationFlags? | Defines the default table creation flags |
utils-lib: | string? | The default utils library, if set overrides sdf_utils |
test-lib: | string? | The default test library, if set overrides sdf_test |
materialization-lib: | string? | The default materialization library, if set overrides sdf_materialization |
linter: | string? | The named lint rule set, uses defaults (from sdftarget/<environment>/lint.sdf.yml) if not set |
index-method: | IndexMethod? | The default index method |
include-type: | IncludeType? | The default include type |
sync-method: | SyncType? | The default sync method |
severity: | Severity? | The default severity for tests and checks |
csv-has-header: | boolean? | CSV data has a header [only for external tables] |
csv-delimiter: | string? | CSV data is separated by this delimiter [only for external tables] |
csv-compression: | CompressionType? | Json or CSV data is compressed with this method [only for external tables] |
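A sketch of a defaults block at the workspace level (all values are placeholders; per the fields above, catalog and schema fall back to outer scopes when unset):

```yaml
workspace:
  defaults:
    dialect: snowflake
    catalog: analytics
    schema: pub
    materialization: view
    preprocessor: jinja
```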
Nested element: String
Nested element: ExcludePath
Field | Type | Description |
---|---|---|
path: | string | A filepath |
exclude-type: | ExcludeType? | Type of excluded artifacts |
Nested element: Dependency
Field | Type | Description |
---|---|---|
name: | String | |
path: | string? | The relative path from this workspace to the referenced workspace, for a Git repo, from the root of the depot to the workspace |
environment: | string? | The chosen workspace environment (none means default) |
target: | string? | The chosen workspace target (none means default) |
git: | string? | The Git repo |
rev: | string? | the Git revision (choose only one of the fields: rev, branch, tag) |
branch: | string? | the Git branch (choose only one of the fields: rev, branch, tag) |
tag: | string? | the Git tag (choose only one of the fields: rev, branch, tag) |
imports: | array | Which models, reports, tests, checks etc. to include from the dependency |
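A sketch of two dependencies, one resolved from a relative path and one from a Git repository (names, path, URL, and revision are placeholders):

```yaml
workspace:
  dependencies:
    - name: shared_models
      path: ../shared-models         # relative path to the other workspace
    - name: ingest_lib
      git: https://github.com/example-org/ingest-lib.git   # hypothetical repo
      rev: 4f2a9c1                   # choose only one of rev, branch, tag
```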
Nested element: Integration
Field | Type | Description |
---|---|---|
type: | IntegrationType | The type of the integration [e.g.: database, metadata, data] |
provider: | ProviderType | The type of the provider [e.g.: snowflake, redshift, s3] |
credential: | string? | Credential identifier for this provider |
cluster-identifier: | string? | The cluster identifier for redshift server |
batch-size: | integer? | The size of the batch when querying the provider |
sources: | array | A list of (possibly remote) sources to read, matched in order, so write specific patterns before more general patterns |
targets: | array | A list of (possibly remote) targets to build, matched in order, so write specific patterns before more general patterns; source patterns are excluded |
buckets: | array | A list of remote buckets to target |
output-location: | string? | The remote output location of the integration |
Nested element: SourcePattern
Field | Type | Description |
---|---|---|
pattern: | string | A source that can be read. Sources must be three-part names with globs, e.g. *.*.* matches all catalogs, schemas, and tables in scope |
time-travel-qualifier: | string? | Time travel qualifier expression (e.g. AT (TIMESTAMP => {{ SOME_TIMESTAMP }})) |
preload: | boolean? | Whether to preload the source |
rename-from: | string? | Renames sources when searching in the remote; the i-th {i} matches the i-th * of the name, so to prefix all catalogs, schemas, and tables with _, use "_{1}._{2}._{3}" |
Nested element: TargetPattern
Field | Type | Description |
---|---|---|
pattern: | string? | A pattern must be a three-part name with globs, e.g. *.*.* matches all catalogs, schemas, and tables in scope |
patterns: | array | A list of patterns. A pattern must be a three-part name with globs, e.g. *.*.* matches all catalogs, schemas, and tables in scope |
preload: | boolean? | Whether to preload the target |
rename-as: | string? | Renames targets; the i-th {i} matches the i-th * of the name, so to prefix all catalogs, schemas, and tables with _, use "_{1}._{2}._{3}" |
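A sketch combining the Integration, SourcePattern, and TargetPattern elements above into a database integration (provider, credential, and the three-part patterns are placeholders):

```yaml
workspace:
  integrations:
    - type: database
      provider: snowflake
      credential: default
      sources:
        - pattern: raw.*.*           # specific patterns should come before general ones
      targets:
        - pattern: analytics.*.*
```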
Nested element: DataBucket
Field | Type | Description |
---|---|---|
uri: | string | The uri of the bucket |
region: | string? | The region of the bucket |
Nested element: FilePath
Field | Type | Description |
---|---|---|
path: | string | A filepath |
Nested element: SystemTime
Field | Type | Description |
---|---|---|
secs_since_epoch: | integer | |
nanos_since_epoch: | integer | |
Nested element: Constant
Nested element: DbtConfig
Field | Type | Description |
---|---|---|
enabled: | boolean | Whether the dbt integration is enabled |
project-dir: | string? | The dbt project directory |
target-dir: | string? | The dbt target directory for the project |
profile-dir: | string? | The directory where dbt profiles are stored |
profile: | String? | The dbt profile to use |
target: | String? | The dbt target for the profile |
auto-parse: | boolean | Automatically run parse between commands (default: true) |
Nested element: IncrementalOptions
Field | Type | Description |
---|---|---|
strategy: | IncrementalStrategy | Incremental strategy; may be one of append, merge, or delete+insert |
unique-key: | string? | Expression used for identifying records in Merge and Delete+Insert strategies; May be a column name or an expression combining multiple columns. If left unspecified, Merge and Delete+Insert strategies behave the same as Append |
merge-update-columns: | array | List of column names to be updated as part of Merge strategy; Only one of merge_update_columns or merge_exclude_columns may be specified |
merge-exclude-columns: | array | List of column names to exclude from updating as part of Merge strategy; Only one of merge_update_columns or merge_exclude_columns may be specified |
on-schema-change: | OnSchemaChange? | Method for reacting to a changing schema in the source of the incremental table. Possible values are fail, append, and sync. If left unspecified, the default behavior is to ignore the change and possibly error out if the schema change is incompatible. fail causes a failure whenever any deviation in the schema of the source is detected; append adds new columns but does not delete the columns removed from the source; sync adds new columns and deletes the columns removed from the source |
compact-mode-warehouse: | string? | Warehouse to use in the incremental mode |
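A sketch of an incremental table using the merge strategy (table and column names are placeholders):

```yaml
table:
  name: analytics.pub.events
  materialization: incremental-table
  incremental-options:
    strategy: merge
    unique-key: event_id
    merge-exclude-columns:
      - inserted_at                  # left untouched when existing rows are merged
    on-schema-change: append         # keep old columns, add new ones
```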
Nested element: SnapshotOptions
Field | Type | Description |
---|---|---|
strategy: | SnapshotStrategy | Snapshot strategy; may be one of timestamp (default) or check |
unique-key: | string | Expression used for identifying records that will be updated according to the snapshot strategy; may be a column name or an expression combining multiple columns |
updated-at: | String? | Name of the timestamp column used to identify the last update time. This option is only required for the timestamp snapshot strategy |
check-cols: | CheckColsSpec? | Specification of which columns to check for change (may be a list of column names or all). This option is only required for the check snapshot strategy |
compact-mode-warehouse: | string? | Warehouse to use in the snapshot mode |
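A sketch of a snapshot table using the timestamp strategy (names are placeholders):

```yaml
table:
  name: analytics.pub.customers_snapshot
  materialization: snapshot-table
  snapshot-options:
    strategy: timestamp
    unique-key: customer_id
    updated-at: updated_at           # required for the timestamp strategy
```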
Nested element: Column
Field | Type | Description |
---|---|---|
name: | String | The name of the column |
description: | string | A description of this column |
datatype: | string? | The type of this column |
classifiers: | array | An array of classifier references |
lineage: | Lineage? | Lineage, a tagged array of column references |
forward-lineage: | Lineage? | Forward Lineage, the columns that this column is used to compute |
reclassify: | array | Array of reclassify instructions for changing the attached classifier labels |
samples: | array | An array of representative literals of this column [experimental!] |
default-severity: | Severity | The default severity for this column's tests and checks |
tests: | array | |
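A sketch of column metadata attached to a table block (column names, datatypes, and the classifier reference are illustrative):

```yaml
table:
  name: analytics.pub.users
  columns:
    - name: user_id
      datatype: bigint
      description: Primary key of the users table
    - name: email
      datatype: varchar
      description: The user's primary email address
      classifiers:
        - PII.email                  # hypothetical classifier.label reference
```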
Nested element: Lineage
Field | Type | Description |
---|---|---|
copy: | array | The output column is computed by copying these upstream columns |
modify: | array | The output column is computed by transforming these upstream columns |
scan: | array | These upstream columns are indirectly used to produce the output (e.g. in WHERE or GROUP BY) |
apply: | array | These functions were used to produce the output column |
Nested element: Reclassify
Field | Type | Description |
---|---|---|
to: | string? | Target classifier |
from: | string? | Expected source classifier |
Nested element: Constraint
Field | Type | Description |
---|---|---|
expect: | string | The constraint macro: must have the form lib.macro(args,..), where lib is any of the libs in scope, std is available by default |
severity: | Severity? | The severity of this constraint |
Nested element: Partition
Field | Type | Description |
---|---|---|
name: | String | The name of the partition column |
description: | string? | A description of the partition column |
format: | string? | The format of the partition column [use strftime format for date/time]; see the [guide](https://docs.sdf.com/guide/schedules) |
Nested element: Label
Field | Type | Description |
---|---|---|
name: | string | The name of the label; use "*" to allow arbitrary strings as labels |
description: | string? | A description of this classifier element |
Nested element: Parameter
Field | Type | Description |
---|---|---|
name: | String? | The name of the parameter |
description: | string? | A description of this parameter |
datatype: | string? | The datatype of this parameter |
classifier: | [string]? | An array of classifier references |
constant: | string? | The required constant value of this parameter |
identifiers: | [string]? | The parameter may appear as an identifier, without quotes |
Nested element: OptionalParameter
Field | Type | Description |
---|---|---|
name: | String | The name of the parameter |
description: | string? | A description of this parameter |
datatype: | string | The datatype of this parameter |
classifier: | [string]? | An array of classifier references |
constant: | string? | The required constant value of this parameter |
identifiers: | [string]? | The parameter may appear as an identifier, without quotes |
Nested element: TypeBound
Field | Type | Description |
---|---|---|
type-variable: | String | |
datatypes: | array | |
Nested element: Example
Field | Type | Description |
---|---|---|
input: | string | The sql string corresponding to the input of this example |
output: | string | The output corresponding to running the input string |
Nested element: RustFunctionSpec
Field | Type | Description |
---|---|---|
name: | string? | The name attribute of the implementing UDF. None indicates the UDF is named the same as the function. |
Nested element: DataFusionSpec
Field | Type | Description |
---|---|---|
udf: | String? | The name attribute of the implementing UDF. None indicates the UDF is named the same as the function. |
Nested element: Config
Field | Type | Description |
---|---|---|
name: | string | The name of the configuration section |
description: | string? | A description of this configuration section |
properties: | object? | |
Nested element: HeadlessCredentials
Field | Type | Description |
---|---|---|
access_key: | string | |
secret_key: | string | |