Simon ingests and unifies your data from anywhere, and we want to share that data back with you. Through [Snowflake Data Sharing](🔗), we give you read-only access to the Simon results share. This sharing is secure: no data is copied or transferred between accounts, so it doesn't affect anything you have running in Simon. Using this shared data, you can query the views directly in your Business Intelligence (BI) tool alongside any of your other tables and report on them.
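
For example, here is a minimal sketch of that kind of query, assuming the share is exposed to you as a database named `SIMON_SHARE` with an event view, and that you have your own `ORDERS` table; every object and column name below is a placeholder, so substitute the names from your account.

```sql
-- Query a Simon-shared view alongside one of your own tables.
-- SIMON_SHARE.PUBLIC.EVENT and ANALYTICS.PUBLIC.ORDERS are placeholder names.
SELECT
    e.event_type,
    COUNT(*)                   AS simon_events,
    COUNT(DISTINCT o.order_id) AS matching_orders
FROM SIMON_SHARE.PUBLIC.EVENT AS e
LEFT JOIN ANALYTICS.PUBLIC.ORDERS AS o
    ON o.email = e.native_id          -- assumes email-keyed contacts
GROUP BY e.event_type;
```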

You can think of this function as a mini data warehouse to use for marketing analytics. With this "warehouse" you can:

  • Measure the results of your Simon-powered experiments in your BI tool

  • Analyze and visualize marketing engagement data to understand the effectiveness of your marketing program (see the sketch after this list)

  • Give other teams and end users at your company flexible access to Simon flow membership and engagement data (like opens, clicks, etc.)

  • Access raw third-party SaaS tool data collected by Simon Data
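
As a sketch of the engagement reporting above, the query below counts daily email opens and clicks from the shared event view. The view name and the `email_open`/`email_click` event names are assumptions; event names are defined in your own Datasets.

```sql
-- Daily email engagement from the shared event view.
-- View and event names are placeholders; substitute your own.
SELECT
    DATE_TRUNC('day', TO_TIMESTAMP(ts))  AS event_day,
    COUNT_IF(event_type = 'email_open')  AS opens,
    COUNT_IF(event_type = 'email_click') AS clicks
FROM SIMON_SHARE.PUBLIC.EVENT
WHERE event_type IN ('email_open', 'email_click')
GROUP BY 1
ORDER BY 1;
```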

**Sharing Data Across Regions and Cloud Platforms**

Simon Snowflake Share supports cross-region shares but is limited to AWS and Azure.

## When to use Snowflake Shares

You can also get data via data feed exports and segment exports, so when should you choose a Snowflake share instead?

| Snowflake Share | Data Feeds | Segment Exports |
| --- | --- | --- |
| An outbound share from Simon into your Snowflake account. This lets us pass on any data tables that are products of Simon (for example: flow adds, event tables, customer table, etc.). | These take a snapshot in time of contacts who are going through a flow. You can also add custom context to the data feed to send through attributes at the time of send (for example: LTV, product they viewed, price, etc.). | Segment exports export the contacts in a segment at that moment to your local machine. We can also pass custom context through this method. |
| Sharing data via Snowflake Share makes it simple for you to get access to multiple Simon data products. The **most common use case** for a Snowflake Share is reporting. We can share data feeds, flow adds, flow actions, the Simon customer table, and any contact fact datasets via a Snowflake Share so you can query that data directly or connect it to a BI tool. | Data feeds can be dropped in [S3, SFTP](🔗), or they can be loaded into Snowflake and shared with you there. **You might choose a data feed** if your preferred method is S3 or SFTP and you're only concerned with flows. | **The most common use case** is to test a segment audience before sending them through a flow. You could [export a segment list](🔗) and check that all users are supposed to be there. |

## Tables

At a high level, Simon's Results share consists of the following views:

  • Identity join (Legacy Identity clients only)

  • Identity event join

  • Event

  • All contact facts results events configured through Datasets or automatically by Simon

  • Customer table join

  • Metadata tables on flows/journeys and actions within flows

**Custom datasets and fields**

Many fields are custom-configured for your dataset. You can also export these fields via a Snowflake share; however, those fields aren't explained in this document because they're unique to each client. Use the [Data Explorer](🔗) and [Data Feeds](🔗), or contact your account manager, for help understanding those tables.

### Identity

A batch identity table that creates consistent Simon IDs per customer across join keys (for example, `user_id`). This is used to join a customer across `client_id`, `email`, and `user_id`.

  • Updated three times per day

| Column | Data type | Description |
| --- | --- | --- |
| Native ID | Varchar | Identifier for the given contact in the given native identifier type |
| Native identifier (e.g. email, client_id, user_id) | Varchar | Identifier type |
| Simon ID | Varchar | The job is implemented as a full batch job, so `simon_ids` will not necessarily be consistent from one run to the next. Joins to `results_simon_id` in the customer table |
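
To illustrate the documented join to `results_simon_id`, here is a minimal sketch that resolves an email address to a Simon ID and pulls the matching customer-table row. The `SIMON_SHARE.PUBLIC.IDENTITY` and `CUSTOMER` view names and the `native_id_type` column name are assumptions; check your share for the exact names.

```sql
-- Resolve an email to a Simon ID, then join to the customer table on
-- results_simon_id (the join key documented above). Object and column
-- names are placeholders.
SELECT c.*
FROM SIMON_SHARE.PUBLIC.IDENTITY AS i
JOIN SIMON_SHARE.PUBLIC.CUSTOMER AS c
    ON c.results_simon_id = i.simon_id
WHERE i.native_id_type = 'email'          -- identifier-type column name is an assumption
  AND i.native_id = 'person@example.com';
```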

### Events

Events configured for use within Segmentation and Results as conversion goals and milestones (for example, Sendgrid sends and purchases). An event is any action a contact may have taken.

  • Updated hourly (or more frequently, depending on the data source)

**Simon Event Tables**

  • `Flow_actions`: Log of all actions taken by Simon via Flows

  • `Flow_adds`: Log of contacts being added to Simon Flows.

| Column | Data type | Description |
| --- | --- | --- |
| Native ID | Varchar | Event contact identifier |
| Native identifier (e.g. email, client_id, user_id) | Varchar | Identifier type |
| TS | Integer | Timestamp for event, in [epoch](🔗) seconds |
| `event_type` | Varchar | Event name, defined in Datasets |
| `Event_properties` (can be 0 to n) | Varchar | Dimensional fields |
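
For example, a minimal sketch of a daily conversion count against the shared event view, assuming the view is exposed as `SIMON_SHARE.PUBLIC.EVENT` and that a `purchase` event is configured (both are placeholders). Because `TS` is stored in epoch seconds, it is converted with `TO_TIMESTAMP`.

```sql
-- Daily counts of a conversion event over the last 30 days.
-- The view name and the 'purchase' event name are placeholders.
SELECT
    DATE_TRUNC('day', TO_TIMESTAMP(ts)) AS event_day,
    COUNT(*)                            AS events
FROM SIMON_SHARE.PUBLIC.EVENT
WHERE event_type = 'purchase'
  AND TO_TIMESTAMP(ts) >= DATEADD('day', -30, CURRENT_TIMESTAMP())
GROUP BY 1
ORDER BY 1;
```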

### Metadata

These tables are under active development and may be helpful for joining additional metadata into some of the tables above. These tables may be labeled with `Alpha`.

  • Updated daily

  • `Metadata_simon_flows`: provides configuration information on each flow

| Column | Data type | Description |
| --- | --- | --- |
| `Campaign_id` | Number | Flow ID |
| `Campaign_name` | Varchar | Flow name |
| `Journey_name` | Varchar | Journey name |
| `Current_status` | Varchar | Flow status (live, stopped, archived) |
| `Campaign_category` | Varchar | Flow type (one-time, recurring, triggered, event triggered, stream) |
| `Holdout_size` | Float | If there is an experiment, holdout size (1 = 100% holdout) |
| `Has_experiment` | Boolean | Whether the flow has an experiment |
| `has_mve2_expt` | Boolean | Can be ignored; use the `has_experiment` column |
| `Experiment_name` | Varchar | Experiment name, if applicable |
| `Experiment_id` | Number | Experiment ID, if applicable |
| `Variant_id` | Number | Flow variant ID, if there is an experiment |
| `Variant_name` | Varchar | Flow variant name, if there is an experiment |
| `Launch_date` | Varchar | Date string of when the flow was scheduled to launch |
| `Launched_at` | Number | When the flow was scheduled to launch, in epoch seconds |
| `Segment_id` | Varchar | Segment ID associated with the flow |
| `Segment_name` | Varchar | Segment name associated with the flow |
| `Ended_at` | Number | When the campaign was stopped, in epoch seconds (_Null_ if the flow is running) |
| `Created_at` | Number | When the flow was initially created, in epoch seconds |
| `Updated_at` | Number | When the flow was last updated, in epoch seconds |
  • `Campaign_action_metadata`: Join table showing the details behind any flow action (e.g. the integration). Campaign actions are the flow actions within a flow. Each flow action maps one-to-one to an action type (e.g. triggered send) and an integration (e.g. Sendgrid).

  • Updated daily

| Column | Data type | Description |
| --- | --- | --- |
| `Campaign_action_id` | Number | Flow action ID |
| `Campaign_id` | Number | Flow ID of the flow this action is part of |
| `Integration_name` | Varchar | Integration name linked to the flow action |
| `Action_name` | Varchar | Action name (for example, `triggered_send`) |
| `Created_at` | Number | When the flow action was initially created, in epoch seconds |
| `Updated_at` | Number | When the flow action was last updated, in epoch seconds |
| `Action_config` | Variant | Flexible variant data structure that includes data from templates, custom context, or channel configuration |
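
To show how these metadata tables can be joined in, here is a hedged sketch that reports flow adds by experiment variant and reads a nested key out of `Action_config`. The database/schema prefixes, the assumption that the flow adds table carries a `campaign_id` column, and the `template_id` key are all placeholders.

```sql
-- Flow adds by experiment variant, joining the flow_adds event table to
-- metadata_simon_flows on campaign_id (assumed join key; check your share).
SELECT
    m.campaign_name,
    m.experiment_name,
    m.variant_name,
    m.holdout_size,
    COUNT(*) AS adds
FROM SIMON_SHARE.PUBLIC.FLOW_ADDS            AS fa
JOIN SIMON_SHARE.PUBLIC.METADATA_SIMON_FLOWS AS m
    ON m.campaign_id = fa.campaign_id
WHERE m.has_experiment                        -- documented as the experiment flag to use
GROUP BY 1, 2, 3, 4
ORDER BY adds DESC;

-- Action_config is a Variant, so nested keys can be read with Snowflake's
-- path syntax; 'template_id' is a hypothetical key used for illustration.
SELECT
    campaign_action_id,
    integration_name,
    action_name,
    action_config:template_id::STRING AS template_id
FROM SIMON_SHARE.PUBLIC.CAMPAIGN_ACTION_METADATA;
```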

## How do I access and use this data?

For detail on how to use Snowflake Share itself, please refer to Snowflake's [documentation](🔗).

### Existing Snowflake account

Contact your account manager to set up a share. You'll be given client access, then refer to Snowflake's [documentation](🔗) for more detail on how to use Snowflake.

### No Snowflake account

If you don't currently use Snowflake, no problem: we can set you up with a sub-account so you can still view and query your data. Contact your account manager to set this up, then refer to Snowflake's [documentation](🔗) for more detail on how to use Snowflake.

What we'll give you:

  • A URL to log in to and run queries in your sub-account.

  • A database you can create tables/views in and upload data to. You can store QA views that reference the share database, _and_ you can also upload other data into Snowflake and join it with data from our share to perform additional analysis and reporting (PROD/DEV schemas are generated by default). See the sketch after this list.

  • Roles we can assign your users to (we'll need their email address)

  • Access to run QA queries with the data we provide you
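
For instance, here is a minimal sketch of a QA view stored in your own database that references the share, followed by a join back to data you've uploaded yourself. `MY_DB`, `SIMON_SHARE`, and `UPLOADED_ORDERS` are placeholder names.

```sql
-- Store a QA view in your sub-account database (the PROD schema is generated
-- by default) that filters the shared flow adds view to the last 7 days.
CREATE OR REPLACE VIEW MY_DB.PROD.FLOW_ADDS_LAST_7_DAYS AS
SELECT *
FROM SIMON_SHARE.PUBLIC.FLOW_ADDS
WHERE TO_TIMESTAMP(ts) >= DATEADD('day', -7, CURRENT_TIMESTAMP());

-- Join the QA view with data you've uploaded for additional analysis.
SELECT
    v.event_type,
    COUNT(*)       AS adds,
    SUM(o.revenue) AS revenue
FROM MY_DB.PROD.FLOW_ADDS_LAST_7_DAYS AS v
LEFT JOIN MY_DB.PROD.UPLOADED_ORDERS  AS o
    ON o.email = v.native_id          -- assumes email-keyed contacts
GROUP BY 1;
```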

We set up the following roles (and correlated permissions) that you can request for individual users:

| Role type | Reader | Power | IT Admin |
| --- | --- | --- | --- |
| Query both databases | | | |
| Generate [stored procedures](🔗) | | | |
| Create [stages](🔗) | | | |
| [Create/modify](🔗)/[drop](🔗) tables or views | | | |
| Create [tasks](🔗) | | | |
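
As a rough sketch of what the higher-privilege roles cover per the table above (object names and the warehouse are placeholders, and availability depends on the role you're assigned):

```sql
-- Create a stage for loading your own files (Create stages).
CREATE STAGE IF NOT EXISTS MY_DB.PROD.QA_STAGE;

-- Create, modify, or drop tables and views.
CREATE TABLE IF NOT EXISTS MY_DB.PROD.QA_NOTES (
    note       STRING,
    created_at TIMESTAMP_NTZ
);

-- Create a task that refreshes a QA table on a schedule (Create tasks).
CREATE TASK IF NOT EXISTS MY_DB.PROD.REFRESH_QA_NOTES
  WAREHOUSE = MY_WAREHOUSE
  SCHEDULE  = 'USING CRON 0 6 * * * UTC'
AS
  INSERT INTO MY_DB.PROD.QA_NOTES
  SELECT 'daily refresh', CURRENT_TIMESTAMP();
```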