We support publishing data feeds to an Amazon S3 bucket or SFTP endpoint. This allows you to export contacts as they enter (or exit) a flow, and record the occurrence of other events. The file path, type, naming, schema, control protocol, and optional conditions for Simon data feeds are broken out below.
- When setting up a new flow, scroll to the bottom of the page to the section titled "Configure Data Feed (Optional)".
- Select "Yes".
- Fill in the setup fields and save or launch the flow.
Select the format of the data export from the following options:
Optionally, select the checkbox if you want a contact's identity information to be hashed using SHA-256.
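As an illustration of what SHA-256 hashing of identity information looks like, the sketch below hashes an email address. The lowercasing and whitespace-stripping normalization is an assumption for the example; the exact normalization applied before hashing is not documented here.

```python
import hashlib

def hash_identity(value: str) -> str:
    """Return the hex SHA-256 digest of an identity value.

    Stripping and lowercasing before hashing is an assumption for
    illustration, not documented behavior.
    """
    normalized = value.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()
```

The digest is a fixed-length 64-character hex string, so downstream consumers can join on hashed identities without ever seeing the raw values.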
Select S3 or SFTP.
Enter the name of the S3 bucket or SFTP endpoint you wish to write to. Do not include the scheme (e.g. s3://) or a trailing slash (/) in the name.
Enter the file path you wish to write to. Do not include a beginning or trailing slash (/) in the name. Files are uploaded to the following path:
Add fields to the data feed in addition to the default values in the schema. Add a name and value using the same steps used to set up Custom Contexts.
Two types of files are produced: data files and control files.
Data files are named using the pattern client_flow_operation_random_YmdHMS, with an extension that indicates the file type (either ".csv" or ".jsonl").
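A minimal sketch of building a name that follows this pattern is shown below. The length of the random token is an assumption for illustration; only the overall client_flow_operation_random_YmdHMS shape comes from the documentation.

```python
import random
import string
from datetime import datetime, timezone

def data_file_name(client: str, flow: str, operation: str, ext: str = "csv") -> str:
    """Build a name following the client_flow_operation_random_YmdHMS pattern.

    The 8-character random token is an assumption for illustration.
    """
    token = "".join(random.choices(string.ascii_lowercase + string.digits, k=8))
    stamp = datetime.now(timezone.utc).strftime("%Y%m%d%H%M%S")
    return f"{client}_{flow}_{operation}_{token}_{stamp}.{ext}"
```

The random token keeps names unique when two exports for the same flow and operation land within the same second.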
The file includes, for each contact, one entry for the data feed and one for each applicable flow action. Each entry contains the following fields:

- Time of export, in the format Y-M-D HH:MM:SS (e.g. "2017-03-14 17:54:13")
- Name of the segment
- Name of the flow
- Flow action associated with the entry
- Experiment variant (null if there is not one)
- Type of operation performed (either 'add' or 'remove')
- Custom Contexts (optional)
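To make the schema concrete, here is a sketch of a single JSONL entry. The key names and example values are assumptions for illustration; the documentation specifies the fields' contents but not the exact keys the export uses.

```python
import json

# Illustrative JSONL entry; key names are assumptions, not the
# documented export keys.
entry = {
    "timestamp": "2017-03-14 17:54:13",  # time of export
    "segment": "high_value_customers",   # name of the segment
    "flow": "welcome_series",            # name of the flow
    "action": "send_email",              # flow action for this entry
    "variant": None,                     # experiment variant (null if none)
    "operation": "add",                  # 'add' or 'remove'
    "custom_context": {},                # optional added fields
}
print(json.dumps(entry))
```

A CSV export would carry the same values as one row per entry, with one column per field.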
Control files are named identically to their corresponding data file, with the prefix control_, and are always JSON files. They contain a single JSON object with the following structure:

- Reserved for future use; always true
- Reserved for future use; always 1
- A single-element array of metadata (see below)
The files array contains a single JSON object that provides information about the corresponding data file. It contains:

- The specified bucket
- The specified path
- Either "csv" or "jsonl"
- The full URL of the data file
- The number of rows in the data file
- The ID of the flow
- The name of the flow
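Putting the pieces together, a control file object might look like the sketch below. All key names, and every value shown, are assumptions for illustration; the documented facts are only the two reserved fields (always true, always 1) and the single-element files array with the metadata listed above.

```python
# Hypothetical control-file object; key names are assumptions.
control = {
    "complete": True,   # reserved for future use; always true
    "version": 1,       # reserved for future use; always 1
    "files": [          # single-element array of metadata
        {
            "bucket": "my-export-bucket",    # the specified bucket
            "path": "exports/flows",         # the specified path
            "format": "csv",                 # "csv" or "jsonl"
            "url": "s3://my-export-bucket/exports/flows/example.csv",
            "rows": 1250,                    # rows in the data file
            "flow_id": "42",                 # ID of the flow
            "flow_name": "welcome_series",   # name of the flow
        }
    ],
}
```

A consumer can poll for control files and use the row count to verify the referenced data file arrived complete before processing it.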
Currently, one control file is produced for every data file; in the future, a single control file may cover many data files.