Version: 2.12.X

Preparing Your File

Once you've established a place to put your data in Cogynt, the next step is to prepare your file for processing. Here, you'll tell Cogynt how to recognize and read your data by:

  1. Providing your data in a JSON file.
  2. Analyzing the file for relevant fields.
  3. Creating a schema to confirm the fields to use.

These procedures are described in greater detail under the corresponding topic headings.

Upload a JSON File

Cogynt's Data Management Tool provides several options for uploading data into Kafka. For the purposes of this guide, you'll use the tool's GUI to upload a JSON file to Kafka.
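
If you don't already have a data file handy, the Python sketch below writes a small example file you could use. The record fields, values, and file name are hypothetical placeholders, and whether your deployment expects an array of records or newline-delimited records may vary, so treat it only as a starting point.

```python
import json

# Hypothetical example records; replace the fields and values with your own data.
records = [
    {"id": 1, "name": "Alice", "event_time": "2024-01-15T09:30:00Z"},
    {"id": 2, "name": "Bob", "event_time": "2024-01-15T10:05:00Z"},
]

# Write the records as a JSON array to a file that can be selected for upload.
with open("sample_data.json", "w") as f:
    json.dump(records, f, indent=2)
```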

To upload your JSON file:

  1. Open the Data Management Tool.
  2. On the landing page, click Data File Upload.
  3. From the Select Project dropdown menu, select the project you created earlier.
  4. From the Select Topic dropdown menu, select the topic you created earlier.
  5. From the Select Deployment Target dropdown menu, select the deployment target you created earlier.
  6. Under Data File, either drag your JSON file to the space indicated on the page, or click the space to browse for and select your JSON file.
  7. Inspect the preview to confirm that the selected file matches what you expect.
  8. Click Upload.
  9. In the Confirm Data File Upload dialog, click Upload.
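
To confirm that the records actually reached the topic, you can optionally read a few messages back with a standalone Kafka client outside of Cogynt. The sketch below is one way to do that with the open-source kafka-python library; the topic name and broker address are placeholders for the topic and deployment target you selected above.

```python
from kafka import KafkaConsumer  # pip install kafka-python

# Placeholder topic name and broker address; use the topic and deployment
# target you selected above.
consumer = KafkaConsumer(
    "my-topic",
    bootstrap_servers="kafka.example.com:9092",
    auto_offset_reset="earliest",  # start from the beginning of the topic
    consumer_timeout_ms=5000,      # stop waiting after 5 seconds with no messages
)

# Print up to five records to confirm the upload landed on the topic.
for i, message in enumerate(consumer):
    print(message.value.decode("utf-8"))
    if i >= 4:
        break

consumer.close()
```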

Discover a Schema

Cogynt Authoring's Schema Discovery utility expedites the creation of models by providing data schemas that are pre-populated with relevant values based on your data. In this step, you'll use the utility to analyze your data. The utility will then suggest possible schemas for you to use.

To discover a schema using Schema Discovery:

  1. Click the system panel icon.
  2. In the Other Tasks menu, click Schema Discovery.
  3. In the Schema Discovery window:
    1. From the Project dropdown menu, select your project. (The current project is selected by default.)
    2. From the Kafka Broker dropdown menu, select the Kafka broker for the external Kafka namespace where your data is stored.
    3. In the Host field, specify the host of the Kafka cluster where your data is stored.
    4. In the Port field, specify the port of the Kafka cluster where your data is stored.
    5. Click Discover.

Once it has processed your data, Authoring returns a list of discovered topics. Next, you'll use that list to create your schema and tell Cogynt how to recognize and handle your data.
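
If the Discovered Topics list comes back empty, or Discover reports a connection error, it can help to confirm the Host and Port values with a standalone Kafka client outside of Cogynt. Here is a minimal sketch using the open-source kafka-python library (the broker address is a placeholder for your own host and port):

```python
from kafka import KafkaConsumer  # pip install kafka-python

# Placeholder host and port; use the same values you entered in Schema Discovery.
consumer = KafkaConsumer(bootstrap_servers="kafka.example.com:9092")

# topics() returns the set of topic names visible on the cluster.
print(sorted(consumer.topics()))
consumer.close()
```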

Create a Schema

The Discovered Topics list provides all the necessary tools to put together a schema of your own.

To create a schema using the list of discovered topics:

  1. In the Discovered Topics list, click a relevant topic. The Data Schema window opens, pre-populated with the topic's data schema.
  2. In the Data Schema window:
    1. Verify the information in the Name and Path fields for each schema item. If necessary, enter new values in each field. Check the Sync path to name checkbox if you want both fields to contain identical entries as you type. (For an illustration of how a path locates a value inside a record, see the sketch after these steps.)
    2. Click Sample Data to display or hide a sample of what each schema item's data looks like.
    3. Click Add Field (+) to add a new field. For more information on adding fields, refer to Adding Fields to User Data Schemas in the Cogynt Authoring User Guide.
    4. Click the Delete (trash can) button on the right of each unnecessary schema item to remove it.
    5. Click and drag the dotted drag handle on the right of each field to reposition it in the list of fields.
    6. When finished, click Create User Data Schema to save the schema, or click Clear to discard the schema. (Note: The Create User Data Schema button is enabled only if a project is selected.)
  3. Repeat steps 1-2 as needed for each relevant topic in the Discovered Topics list.
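
Conceptually, a schema item's Path identifies where a field's value lives inside each record, while its Name is how that field is labeled in the schema. As a rough illustration only (the dot-separated syntax below is an assumption for demonstration purposes, not necessarily Cogynt's exact path convention), the sketch resolves a path against a nested record:

```python
import json
from functools import reduce

# A hypothetical nested record of the kind a topic might contain.
record = json.loads('{"user": {"name": "Alice", "location": {"city": "Austin"}}}')

def resolve(path, data):
    """Walk a dot-separated path into a nested record and return the value it points to."""
    return reduce(lambda node, key: node[key], path.split("."), data)

print(resolve("user.name", record))           # -> Alice
print(resolve("user.location.city", record))  # -> Austin
```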

Once your schema is ready, all of the file preparations are complete. The next task is to define the logic that Cogynt should use to process your data.