Create Job
Overview
Jobs are a sequence of tasks that can be chained together to enable the automation of business-critical tasks such as data acquisition, analysis, and reporting. You can execute a job instantly or schedule it for execution. You can view the status of an executing job or see its past execution history.
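Conceptually, a job is an ordered chain of tasks in which each task starts only after the previous one completes. The sketch below is a minimal, hypothetical illustration of that idea; the Task and Job names are invented for this example and are not Infoveave's internal API:

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Task:
    # One unit of work in the chain, e.g. "clean cache" or "execute query".
    name: str
    action: Callable[[], None]

@dataclass
class Job:
    # A job is an ordered list of tasks executed in sequence.
    name: str
    tasks: List[Task] = field(default_factory=list)

    def run(self) -> None:
        for task in self.tasks:
            print(f"Running task: {task.name}")
            task.action()  # each task starts after the previous one finishes
```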
Create New Job
The process of creating a job is as follows:
- To create a job, navigate to Automation > Jobs.
- All the Jobs created by you or shared with you will be displayed.
Note
- Use the search option to look for any existing Jobs.
- Use the Filter by Mode option to filter the Jobs by mode.
- To switch between Card View and List View, click the respective icons next to the search bar.
- To create a new folder in Jobs, click the Folder icon.
- To download the list of all existing Jobs, click the Download icon. The download includes the following details: Entity Id, Name, Description, Content Tags, Created By, Created By User, Created On, Folder, Datasource Id, Datasource Name, and Is Folder Public.
- Jobs with the Production tag indicate that they are currently in production.

Note
- To save a job in Infoveave, ensure that the job has a name, a description, and at least one task added to it.
- To create a new job, click on New Job. This redirects you to the Job Configuration panel.
- Click on a task type title to view all the tasks available for building the job as a sequence of tasks.
- Drag and drop the required tasks from the task panel onto the job designer canvas. Each task represents a specific action or process that needs to be executed as part of the job.
- Connect the tasks together by linking the output of one task to the input of another task. This defines the flow and sequence in which the tasks should be executed.
- Use the mini map to navigate the job designer canvas.
- Provide a meaningful Job Name and Description in the Job Setup Panel. These details help to identify and understand the purpose of each task within the job.
- Configure the job’s behavior upon completion by selecting an appropriate option from the “On Job Completion” dropdown menu. This determines what action should be taken after the job finishes executing.
- Optionally, enable the “Continue if Job Fails” checkbox to specify whether the job should proceed with the remaining tasks even if one or more tasks encounter errors or failures.
- Define a time limit for the job to complete using the “Job Timeout” option. If the job exceeds this specified duration, it will be automatically terminated.
- Choose whether to send a summary report after the job execution by selecting the “Send Job Summary” checkbox. This report provides an overview of the job’s execution details.
- Select the appropriate summary report (query report) from the available options in the dropdown menu.
- Specify the recipient(s) who should receive the summary report for review and analysis.
- Optionally, enable the “Send Summary Only on Failure” option to receive the summary report only if the job encounters any failures. This helps in monitoring and troubleshooting the job execution effectively. (A sketch summarizing these settings follows this list.)
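Taken together, these setup options can be pictured as one configuration record. The sketch below is a hypothetical illustration only; the field names are invented for readability and are not Infoveave's actual schema (all of these settings are made through the Job Setup Panel in the UI):

```python
# Hypothetical representation of the Job Setup Panel options above.
# Field names are illustrative, not Infoveave's actual schema.
job_config = {
    "name": "Process Sales Data",
    "description": "Fetch columns from Sales and upload to Product Details",
    "on_job_completion": "notify",          # action taken after the job finishes
    "continue_if_job_fails": True,          # run remaining tasks even on failure
    "job_timeout_minutes": 30,              # terminate the job past this duration
    "send_job_summary": True,               # email an execution overview
    "summary_report": "Job Summary Query",  # the query report to send
    "summary_recipients": ["ops@example.com"],
    "send_summary_only_on_failure": False,  # send summary only when the job fails
}
```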

Sample: Data Upload Job Configuration
Sample
Suppose we are creating a job named ‘Process Sales Data’ to fetch a few columns from a master datasource named “Sales” and upload them to a new datasource, “Product Details”.
The tasks involved are the clean cache task, the query execution task, and the data upload task.
Before creating the job, ensure that:
- You have the SQL query created on the master datasource to fetch the columns.
- You have created a new datasource with the fetched columns and defined the Measures and Dimensions for the data to be uploaded after the job execution.

The steps involved to create the job are as follows (a rough sketch of this flow appears after the list):
- Create a task to clean the cache for the master datasource “Sales” so that new data is added on every upload.
- On the successful execution of the clean cache task, the query execution task will initiate and fetch the required columns from the master datasource.
- On the success of fetching the columns, the final task of uploading the data to the new datasource will execute.
- You will receive a success notification on the successful execution of the job.
- You will receive a failure/error notification if the job fails at any step.
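As a rough, hypothetical sketch, the ‘Process Sales Data’ job behaves like the chain below. The function names are invented for illustration and are not Infoveave API calls; in the product, this chain is built by dragging tasks onto the job designer canvas and connecting them:

```python
# Hypothetical sketch of the 'Process Sales Data' task chain.
# Function names are invented for illustration, not Infoveave API calls.

def clean_cache(datasource: str) -> None:
    """Clear cached data so that every upload adds fresh rows."""
    print(f"Cache cleaned for {datasource}")

def execute_query(datasource: str, query: str) -> list:
    """Run the prepared SQL query and return the fetched columns."""
    print(f"Query executed on {datasource}")
    return [("ProductA", 120), ("ProductB", 80)]  # placeholder rows

def upload_data(datasource: str, rows: list) -> None:
    """Push the fetched rows into the target datasource."""
    print(f"Uploaded {len(rows)} rows to {datasource}")

def run_process_sales_data() -> None:
    try:
        # Each task starts only after the previous one succeeds.
        clean_cache("Sales")
        rows = execute_query("Sales", "SELECT product, quantity FROM sales")
        upload_data("Product Details", rows)
        print("Success notification: job completed")
    except Exception as exc:
        print(f"Failure notification: job failed ({exc})")

run_process_sales_data()
```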