
Informatica® Cloud Data Integration
May 2022

What's New


Informatica Cloud Data Integration What's New
May 2022

© Copyright Informatica LLC 2016, 2022

This software and documentation are provided only under a separate license agreement containing restrictions on use and disclosure. No part of this document may be reproduced or transmitted in any form, by any means (electronic, photocopying, recording or otherwise) without prior consent of Informatica LLC.

U.S. GOVERNMENT RIGHTS Programs, software, databases, and related documentation and technical data delivered to U.S. Government customers are "commercial computer software" or "commercial technical data" pursuant to the applicable Federal Acquisition Regulation and agency-specific supplemental regulations. As such, the use, duplication, disclosure, modification, and adaptation is subject to the restrictions and license terms set forth in the applicable Government contract, and, to the extent applicable by the terms of the Government contract, the additional rights set forth in FAR 52.227-19, Commercial Computer Software License.

Informatica, Informatica Cloud, Informatica Intelligent Cloud Services, PowerCenter, PowerExchange, and the Informatica logo are trademarks or registered trademarks of Informatica LLC in the United States and many jurisdictions throughout the world. A current list of Informatica trademarks is available on the web at https://www.informatica.com/trademarks.html. Other company and product names may be trade names or trademarks of their respective owners.

Portions of this software and/or documentation are subject to copyright held by third parties. Required third party notices are included with the product.

The information in this documentation is subject to change without notice. If you find any problems in this documentation, report them to us at [email protected].

Informatica products are warranted according to the terms and conditions of the agreements under which they are provided. INFORMATICA PROVIDES THE INFORMATION IN THIS DOCUMENT "AS IS" WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING WITHOUT LIMITATION ANY WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND ANY WARRANTY OR CONDITION OF NON-INFRINGEMENT.

Publication Date: 2022-06-01

Table of Contents

Preface
    Informatica Resources
        Informatica Documentation
        Informatica Intelligent Cloud Services web site
        Informatica Intelligent Cloud Services Communities
        Informatica Intelligent Cloud Services Marketplace
        Data Integration connector documentation
        Informatica Knowledge Base
        Informatica Intelligent Cloud Services Trust Center
        Informatica Global Customer Support

Chapter 1: May 2022
    New features and enhancements
        Email verification
        Intelligent structure models
        Taskflows
        Pushdown optimization preview
        Platform REST API
        Source control
    Connectors
        Hosted Agent support for connectors
        Enhanced connectors
        Changed behavior

Chapter 2: April 2022
    New features and enhancements
        Data Integration Elastic
        Expression autocomplete
        Flat files
        Mapplet transformation names
        Parameter files
        Pushdown optimization
        SQL connection parameters
        Taskflows
        Intelligent structure models
        Transformations
        Data Integration REST API
        Platform REST API
    Changed behavior
        Configuring advanced attributes
        Taskflows
        File listener
    Connectors
        New connectors
        Enhanced connectors
        Changed behavior

Chapter 3: Upgrade
    Preparing for the upgrade
    Post-upgrade tasks for the May 2022 release
        Date and Int96 data types in Avro and Parquet files
    Post-upgrade tasks for the April 2022 release
        TLS 1.0 and 1.1 disablement for the Secure Agent
        Amazon Redshift V2 Connector
        Amazon S3 V2 Connector
        Amazon S3 bucket policy for elastic mappings
        Connection with TLS 1.0 or 1.1
        Databricks Delta Connector
        Elastic clusters in an AWS environment
        Flat files with UTF-8-BOM encoding
        Microsoft Azure Synapse SQL Connector
        Microsoft SQL Server Connector
        SAP Connector
        SSE-KMS encryption for elastic mappings
        File Integration Service proxy

Chapter 4: Enhancements in previous releases

Index

Preface

Read What's New to learn about new features, enhancements, and behavior changes in Informatica Intelligent Cloud Services℠ Data Integration for the May 2022 release. You can also learn about upgrade steps that you might need to perform.

Informatica Resources

Informatica provides you with a range of product resources through the Informatica Network and other online portals. Use the resources to get the most from your Informatica products and solutions and to learn from other Informatica users and subject matter experts.

Informatica Documentation

Use the Informatica Documentation Portal to explore an extensive library of documentation for current and recent product releases. To explore the Documentation Portal, visit https://docs.informatica.com.

If you have questions, comments, or ideas about the product documentation, contact the Informatica Documentation team at [email protected].

Informatica Intelligent Cloud Services web site

You can access the Informatica Intelligent Cloud Services web site at http://www.informatica.com/cloud. This site contains information about Informatica Cloud integration services.

Informatica Intelligent Cloud Services Communities

Use the Informatica Intelligent Cloud Services Community to discuss and resolve technical issues. You can also find technical tips, documentation updates, and answers to frequently asked questions.

Access the Informatica Intelligent Cloud Services Community at:

https://network.informatica.com/community/informatica-network/products/cloud-integration

Developers can learn more and share tips at the Cloud Developer community:

https://network.informatica.com/community/informatica-network/products/cloud-integration/cloud-developers

Informatica Intelligent Cloud Services Marketplace

Visit the Informatica Marketplace to try and buy Data Integration Connectors, templates, and mapplets:


https://marketplace.informatica.com/

Data Integration connector documentation

You can access documentation for Data Integration Connectors at the Documentation Portal. To explore the Documentation Portal, visit https://docs.informatica.com.

Informatica Knowledge Base

Use the Informatica Knowledge Base to find product resources such as how-to articles, best practices, video tutorials, and answers to frequently asked questions.

To search the Knowledge Base, visit https://search.informatica.com. If you have questions, comments, or ideas about the Knowledge Base, contact the Informatica Knowledge Base team at [email protected].

Informatica Intelligent Cloud Services Trust Center

The Informatica Intelligent Cloud Services Trust Center provides information about Informatica security policies and real-time system availability.

You can access the trust center at https://www.informatica.com/trust-center.html.

Subscribe to the Informatica Intelligent Cloud Services Trust Center to receive upgrade, maintenance, and incident notifications. The Informatica Intelligent Cloud Services Status page displays the production status of all the Informatica cloud products. All maintenance updates are posted to this page, and during an outage, it will have the most current information. To ensure you are notified of updates and outages, you can subscribe to receive updates for a single component or all Informatica Intelligent Cloud Services components. Subscribing to all components is the best way to be certain you never miss an update.

To subscribe, go to https://status.informatica.com/ and click SUBSCRIBE TO UPDATES. You can then choose to receive notifications sent as emails, SMS text messages, webhooks, RSS feeds, or any combination of the four.

Informatica Global Customer Support

You can contact a Customer Support Center by telephone or online.

For online support, click Submit Support Request in Informatica Intelligent Cloud Services. You can also use Online Support to log a case. Online Support requires a login. You can request a login at https://network.informatica.com/welcome.

The telephone numbers for Informatica Global Customer Support are available from the Informatica web site at https://www.informatica.com/services-and-training/support-services/contact-us.html.


Chapter 1: May 2022

The following topics provide information about new features, enhancements, and behavior changes in the May 2022 release of Informatica Intelligent Cloud Services℠ Data Integration.

New features and enhancements

The May 2022 release of Informatica Intelligent Cloud Services℠ Data Integration includes the following new features and enhancements.

Email verification

Informatica Intelligent Cloud Services verifies user email addresses.

When you update the email address in your user profile, Informatica Intelligent Cloud Services sends a verification email to the new email address. The new email address is verified after you click the link in the verification email.

If your email address has never been verified, an alert message appears at the top of your user profile. To verify your email address, click the "Send verification email" link in the alert message. The email address is verified after you click the link in the verification email.

For more information about your user profile, see Getting Started.

Intelligent structure models

This release includes the following enhancements to intelligent structure models:

Search for nodes in a model by data type

You can search for nodes in a model by data type. When you search by data type, the data type list is divided into basic and semantic data types and contains all the data types that are used in the model.

Create models from complex files that contain textual hierarchies

You can use a complex file that contains textual hierarchies as a sample file to base the model on.

For more information about intelligent structure models, see Components.


Taskflows

This release includes the following enhancements to taskflows:

Support for parameter set

You can assign a parameter set to provide values for input parameters in taskflows, mapping tasks, Subtaskflow steps, and Data Task steps at design time.

Parameter sets are parameter files that contain sections and parameters at the taskflow level. You can upload parameter files to the Informatica managed cloud-hosted repository using the ParamSetCli utility. The uploaded parameter files are known as parameter sets, which you can use in a taskflow. ParamSetCli is Informatica's command-line interface utility that enables you to upload, download, and delete a parameter set and to list the parameter sets in the cloud-hosted repository. To use the ParamSetCli utility with proxy settings, you must have the Secure Agent installed on the same machine as the ParamSetCli utility.

You can also use the RunAJob utility to provide values for the taskflow input parameters using a parameter set and run the taskflow. The taskflow reads the parameter values from the parameter set.

To learn how to configure and run a taskflow with a parameter set, see the following video:

https://www.youtube.com/watch?v=zDPYS9e0ryM

For more information about the parameter set and the ParamSetCli utility, see Taskflows and REST API Reference.

Bulk publish for taskflows

You can use the publish resource to publish multiple taskflows simultaneously.

To publish multiple taskflows simultaneously, use a POST request with the following URL:

<Informatica Intelligent Cloud Services URL>/active-bpel/asset/v1/publish
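For example, the following Python sketch publishes two taskflows in one request. The base URL, the session header, and the request body fields are illustrative assumptions, not the confirmed schema; verify the exact payload in the REST API Reference.

import requests

# Assumptions: the pod base URL, the INFA-SESSION-ID header, and the
# "assets" body field are placeholders for illustration; confirm the
# payload schema in the REST API Reference.
base_url = "https://na1.dm-us.informaticacloud.com"
session_id = "<session ID returned by the platform login API>"

response = requests.post(
    f"{base_url}/active-bpel/asset/v1/publish",
    headers={"INFA-SESSION-ID": session_id, "Content-Type": "application/json"},
    json={"assets": ["Explore/Project1/Taskflow1", "Explore/Project1/Taskflow2"]},
)
response.raise_for_status()
print(response.json())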

For more information about using the publish resource, see Taskflows.

Pushdown optimization preview

When you create a mapping that is configured for pushdown optimization, you can preview the SQL query that Data Integration pushes to the database on the Pushdown Optimization panel.

For more information about running a pushdown preview job, see Mappings.

Platform REST API

This release includes the following enhancements to the Informatica Intelligent Cloud Services platform REST API.

Create, update, and delete projects and folders

You can create, update, and delete projects and folders using the projects and folders REST API version 3 resources.

Manage permissions for assets, projects, and folders

You can create, update, and delete user and user group permissions for assets, projects, and folders. You can also check permissions for a particular object or asset type. Use the objects REST API version 3 resource to manage object permissions.

Include multiple assets in a pull request

You can pull multiple assets in one pull request instead of using individual pull requests for each asset.


Update users and roles in a user group

You can add users and roles to a user group or remove users and roles from a user group using the userGroups REST API version 3 resource.

Update privileges for custom roles

You can add and remove privileges for custom roles using the roles REST API version 3 resource.

Verify the checksum of the Secure Agent installation program

You can verify the checksum of the Secure Agent installation program using the agent REST API version 2 resource.
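As one hedged illustration of the version 3 resource pattern, the following Python sketch creates a project with the projects resource. The endpoint path and body fields follow common version 3 conventions and are assumptions; confirm them in the REST API Reference.

import requests

# Assumptions: the v3 endpoint path and the body fields are illustrative;
# verify the exact resource URL and schema in the REST API Reference.
base_url = "https://na1.dm-us.informaticacloud.com"
session_id = "<session ID returned by the platform login API>"

response = requests.post(
    f"{base_url}/saas/public/core/v3/projects",
    headers={"INFA-SESSION-ID": session_id, "Content-Type": "application/json"},
    json={"name": "SalesIntegrations", "description": "Created with the v3 projects resource"},
)
response.raise_for_status()
print(response.json())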

For more information, see REST API Reference.

Source control

This release includes the following enhancements to source control.

Undo the checkout of projects and folders

If you undo the checkout of a project or folder, you can select which objects within the project or folder to include or exclude.

Unlink objects from source control

You can unlink an object that's checked out by another user if you have the Admin role or your user role has the Force Undo Checkout privilege for the Administrator service.

For more information about source control, see Asset Management in Data Integration or Organization Management in Administrator.

Connectors

The May 2022 release includes the following connector enhancements and behavior changes.

Hosted Agent support for connectors

Effective in the May 2022 release, you can use the Hosted Agent to run mappings with the following connectors:

• Coupa Connector

• Cvent Connector

• JIRA Connector

• NetSuite Connector

• OData Connector

• Xactly Connector

• Zuora AQuA Connector


Enhanced connectors

This release includes enhancements to the following connectors.

Amazon DynamoDB V2 Connector

This release includes the following enhancements for Amazon DynamoDB V2 Connector:

• Informatica lifted the Amazon DynamoDB V2 Connector from technical preview.

• You can configure the AWS tags target property to identify DynamoDB target tables.

• You can use an existing DynamoDB target object to write data to an Amazon DynamoDB table. When you select an existing target object, the schema is inferred from the incoming fields to the target transformation.

Coupa V2 Connector

If an error occurs in a row when you run a web service, the mapping assigns the fault information to the fault group.

Google BigQuery V2 Connector

You can configure a Google BigQuery V2 connection to create a staging file, staging table, or staging view with a unique name.

Google Cloud Spanner Connector

You can configure partitioning in elastic mappings to read data from Google Cloud Spanner sources.

Google Cloud Storage V2 Connector

You can use Informatica encryption for mappings.

Google Drive Connector

This release includes the following enhancements for Google Drive Connector:

• You can use the Folder_FilesGetAll object in a mapping to read from files available within a folder based on the folder ID you specify in the filter. The Folder_FilesGetAll object is applicable for files created outside of Google Drive.

• You can configure a filter condition to get a list of files based on the date and time when the file was created or modified.

• You can configure a mapping to download a file from Google Drive based on the file ID and the file path you specify in the source properties. You can also specify a file path in the target properties to upload a file to Google Drive.

Kafka Connector

This release includes the following enhancements for Kafka Connector:

• You can configure a mapping to read messages from a Kafka broker in real time or in batches.

• When you configure a mapping to read data from a Kafka topic in real time, you can configure Informatica partitions to optimize the performance of the mapping task.

Microsoft Azure Cosmos DB SQL API Connector

This release includes the following enhancements for Microsoft Azure Cosmos DB SQL API Connector:

• You can create and run elastic mappings to read from and write to JSON files in Microsoft Azure Cosmos DB SQL API.

• You can use an elastic mapping to read hierarchical data types from JSON files.


Microsoft Azure Data Lake Storage Gen2 Connector

You can use mappings and mapping tasks to read data from and write data to a fixed-width flat file.

Snowflake Data Cloud Connector

This release includes the following enhancements for Snowflake Data Cloud Connector:

• When you use a Snowflake Data Cloud connection in mappings enabled for pushdown optimization, you can push ASCII(), DATE_COMPARE(), IN(), STDDEV(), and TRUNC(DATE) functions for processing to the Snowflake database.

• You can configure a Target transformation in an elastic mapping to create a new Snowflake target at runtime.

Web Service Consumer Connector

You can use an unauthenticated proxy server when you configure NTLM authentication in a Web Service Consumer connection.

Changed behavior

This release includes changes in behavior for the following connectors.

Formatting options for a flat file

• If the columns have a qualifier in the source data, the qualifier is retained for both empty and non-empty columns in the target. Previously, the qualifier was retained only for the non-empty columns in the target.

• If you specify a qualifier, the value of the qualifier is also considered as the escape character. Previously, the qualifier was used only as the qualifier, and the escape character was used to escape a character in the source data.

These changes apply to Amazon S3 V2 Connector, Google Cloud Storage V2 Connector, and Microsoft Azure Data Lake Storage Gen2 Connector.


Chapter 2: April 2022

The following topics provide information about new features, enhancements, and behavior changes in the April 2022 release of Informatica Intelligent Cloud Services Data Integration.

New features and enhancements

The April 2022 release of Informatica Intelligent Cloud Services℠ Data Integration includes the following new features and enhancements.

Data Integration Elastic

After you incrementally load source files in an elastic mapping, you can run a job to reprocess files that were modified in a specific time interval. Running a reprocessing job lets you time travel: you can create a snapshot of the data from a given time interval, debug and discover the source of bad data found in your target, or restore deleted data.

For example, you have an elastic mapping task that incrementally loads files every day at 12:00:00 p.m. On Monday, April 4, you discover that bad data entered on the previous Friday, April 1, affected the jobs that ran over the weekend. To fix this, you configure a reprocessing job to reload files changed after 04/01/2022 12:00:01 p.m.

For more information, see Tasks.

Expression autocomplete

When you build an expression, Data Integration suggests functions, parameters, system variables, fields, and user-defined functions to complete the expression.

Data Integration offers autocomplete suggestions for an expression when you configure an Expression transformation with non-hierarchical data or when you configure a user-defined function.

Flat files

This release includes the following enhancements to flat files:

• When you search for an object in a flat file connection, you can browse and select an object from subfolders within the default directory. When you create a flat file connection, the directory that you specify is the default connection directory.


• You can edit the auto-generated field names in lookups for flat files with no header in a mapping.

• You can edit the metadata of flat file lookup return fields in a mapping task.

• When you configure a mapping task, you can edit field metadata for parameterized flat file source and lookup file list objects.

• You can configure a mapping task to retain design-time metadata for a parameterized flat file object.

For more information about configuring flat file connections, see Data Integration Connections. For more information about using flat file objects in mappings, see Transformations. For more information about mapping tasks, see Tasks.

Mapplet transformation names

When you use a mapplet in a mapping created after the April 2022 release, Data Integration prefixes the transformation names in the mapplet with the Mapplet transformation name at run time.

For example, a mapplet contains an Expression transformation named Expression_1. You create a mapping and use the mapplet in the Mapplet transformation Mapplet_Tx_1. When you run the mapping, the Expression transformation is renamed to Mapplet_Tx_1_Expression_1.

Data Integration does not update transformation names in mapplets that are used in mappings created prior to the April 2022 upgrade.

For more information about mapplets, see Components. For more information about Mapplet transformations, see Transformations.

Parameter files

You can create a new target when you use a parameter file. If a target with the same name as the target specified in the parameter file doesn't exist, a new target is created.

For more information about parameter files, see Mappings.

Pushdown optimization

This release includes the following enhancements to pushdown optimization:

Optimization context type

You can provide details about the optimization context for multi-insert and slowly changing dimension type 2 merge scenarios. Based on the context that you provide, Data Integration combines multiple targets in the mapping and constructs a single query for pushdown optimization.

Cancel the task

If the pushdown optimization mode that you select is not possible, you can choose to cancel the mapping task.

For more information about pushdown optimization, see Tasks or the help for the appropriate connector.

SQL connection parameters

When you resolve SQL transformation connection parameters in a mapping task, you can configure advanced attributes for some connection types.

To see if a connector supports configuring advanced attributes, see the help for the appropriate connector. For more information about SQL transformations, see Transformations.


Taskflows

This release includes the following enhancements to taskflows:

APIs to resume a suspended taskflow

You can use the resumeWithFaultRetry resource to resume a suspended taskflow instance from a faulted step. You can also use the resumeWithFaultSkip resource to skip a faulted step and resume a suspended taskflow instance from the next step.
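The following Python sketch shows the retry variant. The resource path, HTTP method, and basic authentication are assumptions; only the resource names resumeWithFaultRetry and resumeWithFaultSkip come from this release note. Confirm the exact URL in the REST API Reference, and substitute resumeWithFaultSkip to skip the faulted step instead.

import requests

# Assumptions: the instance URL pattern, the PUT method, and basic
# authentication are placeholders; only the resumeWithFaultRetry and
# resumeWithFaultSkip resource names are taken from the release notes.
base_url = "https://na1.dm-us.informaticacloud.com"
instance_id = "<ID of the suspended taskflow instance>"

response = requests.put(
    f"{base_url}/active-bpel/api/rt/v1/instances/{instance_id}/resumeWithFaultRetry",
    auth=("<user name>", "<password>"),
)
response.raise_for_status()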

For more information about using the APIs to resume a suspended taskflow, see REST API Reference.

Display output fields when a Data Task step fails

When the Data Task step of a taskflow fails, you can view the output fields on the Fault tab of the My Jobs page in Data Integration, and the All Jobs page and Running Jobs page in Monitor.

You can view the output fields on the Fault tab when one of the following conditions is met:

• The On Error field is set to Ignore or Custom error handling.

• The Fail taskflow on completion option is set to If this task fails.

Using the output fields of the failed data task, you can make decisions and update the taskflow design. When you use a Decision step in a taskflow and select the entire data task as the decision field, the Decision step takes the Is set path by default.

For more details about faulted data tasks, see Taskflows and Monitor.

Support for data transfer task and dynamic mapping task in a Data Task step

You can add a data transfer task and dynamic mapping task to a Data Task step of a taskflow.

You can use a data transfer task in a taskflow to transfer data from a source to a target. You can use a dynamic mapping task in a taskflow to run specific groups and jobs configured in the task.

For more information about using a data transfer task and dynamic mapping task in a Data Task step, see Taskflows.

Intelligent structure models

This release includes the following enhancements to intelligent structure models:

Include schema elements in models based on Avro, Parquet, and ORC files

When you create an intelligent structure model that is based on an Avro, Parquet, or ORC file, Intelligent Structure Discovery includes the schema elements in the model, thus making elements that don't contain data part of the model.

Parse JSON-encoded Avro messages

You can use models that are based on an Avro schema to parse JSON-encoded Avro messages.

For more information about intelligent structure models, see Components.

Transformations

This release includes the following enhancements to transformations.

Hierarchy Builder transformation

The Hierarchy Builder transformation can write data to a flat file. Use the file output type when the transformation processes a large amount of data and the output field size exceeds 100 MB.


Hierarchy Processor transformation

The Hierarchy Processor transformation includes a flattened option for output data. Use the flattened output format to convert hierarchical input into denormalized output.

Machine Learning transformation

This release includes the following enhancements to the Machine Learning transformation.

Amazon SageMaker

The Machine Learning transformation can run a machine learning model that is deployed on Amazon SageMaker.

Sending bulk requests

You can configure the Machine Learning transformation to combine multiple API requests into one bulk request before sending the data to the machine learning model. Bulk requests can improve performance by reducing processing overhead and the amount of time that it takes to communicate with the model.

Serverless runtime environments

You can run the Machine Learning transformation in a serverless runtime environment.

For more information about the transformations, see Transformations.

Data Integration REST API

Use the code task API to submit Spark code written in Scala to an elastic cluster. You can view job results in Monitor.
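As a hedged sketch, the following Python code submits a small Scala snippet through a hypothetical code task endpoint. The endpoint path and payload field names are assumptions; only the existence of the code task API comes from this release note, so check the REST API Reference for the actual schema.

import requests

# Assumptions: the endpoint path and the "name" and "code" payload fields
# are illustrative placeholders; see the REST API Reference for the real
# code task schema.
base_url = "https://na1.dm-us.informaticacloud.com"
session_id = "<session ID returned by the platform login API>"

scala_code = """
val df = spark.read.parquet("s3://example-bucket/input/")
df.groupBy("country").count().write.parquet("s3://example-bucket/output/")
"""

response = requests.post(
    f"{base_url}/disnext/api/v1/codetask",
    headers={"INFA-SESSION-ID": session_id, "Content-Type": "application/json"},
    json={"name": "country_counts", "code": scala_code},
)
response.raise_for_status()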

For more information about the code task API, see the REST API Reference.

Platform REST API

You can assign a Secure Agent to an existing Secure Agent group through the Informatica Intelligent Cloud Services REST API using the runtimeEnvironment resource.
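A minimal Python sketch of the assignment, assuming the version 2 icSessionId header convention and a PUT that updates the group's agent list; the method, path, and body fields are assumptions to verify in the REST API Reference.

import requests

# Assumptions: the icSessionId header follows the version 2 API convention,
# and the "agents" body field is an illustrative placeholder; confirm the
# runtimeEnvironment request format in the REST API Reference.
base_url = "https://na1.dm-us.informaticacloud.com/saas"
session_id = "<session ID returned by the platform login API>"
group_id = "<ID of the Secure Agent group>"

response = requests.put(
    f"{base_url}/api/v2/runtimeEnvironment/{group_id}",
    headers={"icSessionId": session_id, "Content-Type": "application/json"},
    json={"agents": [{"id": "<ID of the Secure Agent to assign>"}]},
)
response.raise_for_status()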

For more information, see the REST API Reference.

Changed behavior

The April 2022 release of Informatica Intelligent Cloud Services Data Integration includes the following changed behaviors.

Configuring advanced attributes

When you resolve connection and object parameters for Source, Target, or Lookup transformations in a mapping task, you configure advanced attributes for each object. If the transformation contains a connection parameter but no object parameter, the configured object is displayed in the task.

Previously, you configured advanced attributes for each connection parameter.

For more information about the advanced attributes that you can configure, see the help for the appropriate connector.


Taskflows

The Publish button is added to the taskflow designer page.

Previously, the Publish option was available under the Actions menu on the taskflow designer page.

For more information about publishing taskflows, see Taskflows.

File listener

When you use a file listener as a source in a file ingestion task, if a notification about a file event doesn't reach the file ingestion task, the file listener queues the event and includes it in the notification it sends to the next file ingestion job. A file ingestion task thus receives a notification about each file at least once. This ensures that the file ingestion task transfers all files to the target.

Previously, if a notification about a file event didn't reach the file ingestion task, the file listener didn't continue to notify the file ingestion task about the event, and the task didn't transfer the files to the target.

For more information about file listener notifications, see Components.

Connectors

The April 2022 release includes new connectors, enhanced connectors, and connector behavior changes.

New connectors

This release includes the following new connectors.

Adabas Connector

You can use Adabas Connector to connect to a PowerExchange Adabas environment to retrieve data in bulk from an Adabas source database on a z/OS system. The PowerExchange Listener retrieves metadata from the data map repository and data from the Adabas source. The data is returned to the PowerExchange Bulk Reader, which sends the data to Data Integration. Data Integration can then send the data to a supported target for a batch load.

Adabas CDC Connector

You can use Adabas CDC Connector to connect to a PowerExchange CDC environment to retrieve change records that PowerExchange captures from Adabas PLOG data sets for an Adabas source database on a z/OS system. Adabas CDC Connector extracts change records from PowerExchange Logger log files and sends the change records to Data Integration. Data Integration can then transmit the change records to a supported target.

Business 360 Events Connector

You can use Business 360 Events Connector to publish events from Business 360 applications to targets that Data Integration supports, such as Kafka and Amazon S3. You can publish events related to actions on business entity records, such as create, update, and delete.

Db2 for i Connector

You can use Db2 for i Connector to connect to a PowerExchange Db2 environment to move bulk data from or to a Db2 for i database. For relational sources and targets such as Db2 for i tables, you do not need to create a data map. The connector can import the metadata that PowerExchange reads from the Db2 catalog to create a source or target.

Db2 for z/OS Connector

You can use Db2 for z/OS Connector to connect to a PowerExchange Db2 environment to move bulk data from or to a Db2 for z/OS database. For relational sources and targets such as Db2 for z/OS tables, you do not need to create a data map. The connector can import the metadata that PowerExchange reads from the Db2 catalog to create a source or target.

IMS Connector

You can use IMS Connector to connect to a PowerExchange IMS environment to retrieve data in bulk from an IMS source database on a z/OS system. The PowerExchange Listener retrieves metadata from the data map repository and data from the IMS source. The data is returned to the PowerExchange Bulk Reader, which sends the data to Data Integration. Data Integration can then send the data to a supported target for a batch load.

IMS CDC Connector

You can use IMS CDC Connector to connect to a PowerExchange CDC environment to retrieve change records that PowerExchange captures in near real time for an IMS source database on a z/OS system. IMS CDC Connector extracts change records from PowerExchange Logger log files and sends the change records to Data Integration. Data Integration can then transmit the change records to a supported target.

SAP IQ Connector

You can use SAP IQ Connector to connect to SAP IQ database from Data Integration. Use SAP IQ Connector to write data to an SAP IQ database. You can use SAP IQ objects as targets in mappings and mapping tasks. When you use these objects in mappings, you must configure properties specific to SAP IQ.

You can only insert records when you configure a Target transformation in an SAP IQ mapping.

Sequential File Connector

You can use Sequential File Connector to connect to a PowerExchange sequential file environment to retrieve data in bulk from sequential source data sets on a z/OS system. The PowerExchange Listener retrieves metadata from the data map repository and data from the sequential data sets. The data is returned to the PowerExchange Bulk Reader, which sends the data to Data Integration. Data Integration can then send the data to a supported target for a batch load.

VSAM Connector

You can use VSAM Connector to connect to a PowerExchange VSAM environment to retrieve data in bulk from VSAM source data sets on a z/OS system. The PowerExchange Listener retrieves metadata from the data map repository and data from the VSAM data sets. The data is returned to the PowerExchange Bulk Reader, which sends the data to Data Integration. Data Integration can then send the data to a supported target for a batch load.

Enhanced connectors

This release includes enhancements to the following connectors.

Amazon DynamoDB V2 Connector

This release includes the following enhancements for Amazon DynamoDB V2 Connector:

• You can use a serverless runtime environment to run Amazon DynamoDB V2 elastic mappings.

• You can use temporary security credentials, created by AssumeRole, to access AWS resources.


• You can use the Hierarchy Processor transformation in elastic mappings to convert hierarchical input into relational output and relational input into hierarchical output.

Amazon Redshift V2 Connector

This release includes the following enhancements for Amazon Redshift V2 Connector:

• You can configure client-side encryption for Amazon Redshift V2 sources and targets when you use a serverless runtime environment.

• When you configure a full or source pushdown optimization for an Expression transformation, you can use variables to define calculations and store data temporarily.

• You can run elastic mappings on a self-service cluster.

Amazon S3 V2 Connector

This release includes the following enhancements for Amazon S3 V2 Connector:

• You can configure an Amazon S3-compatible storage, such as Scality RING and MinIO, to access and manage the data that is stored over an S3 compliant interface.

• You can configure client-side encryption for Amazon S3 V2 sources and targets when you use a serverless runtime environment.

• After you incrementally load source files in an elastic mapping, you can run a job to reprocess files that were modified in a specific time interval.

• You can write to a partition directory incrementally in an elastic mapping and append data to the partition directory.

• You can run elastic mappings on a self-service cluster.

• You can use a multi-character delimiter in flat files.

Databricks Delta Connector

This release includes the following enhancements for Databricks Delta Connector:

• You can use a Hosted Agent to run Databricks Delta mappings.

• You can run elastic mappings on a self-service cluster.

• Pushdown enhancements in mappings that use a Databricks Delta connection:

- You can configure pushdown optimization for mappings in the following scenarios:

  - Mappings that read from a Microsoft Azure Data Lake Storage Gen2 source and write to a Databricks Delta target.

  - Mappings that read from an Amazon S3 V2 source and write to a Databricks Delta target.

- When you configure full pushdown optimization for a task that reads from and writes to Databricks Delta, you can determine how Data Integration handles the job when pushdown optimization does not work. You can set the task to fail or run without pushdown optimization.

- When you configure full pushdown optimization for an Aggregator or Expression transformation, you can use variables to define calculations and store data temporarily.

Google BigQuery V2 Connector

This release includes the following enhancements for Google BigQuery V2 Connector:

• Pushdown enhancements in mappings using Google BigQuery V2 connection

- You can configure source pushdown optimization in mappings that read from Google BigQuery sources and write to Google BigQuery targets using the Google BigQuery V2 connection.


- When you configure a full or source pushdown optimization for a mapping and a transformation is not applicable, the task partially pushes down the mapping logic to the point where the transformation is supported for pushdown optimization.

- You can read data from Google BigQuery standard and materialized views as a source and lookup object.

- When you configure a full or source pushdown optimization for an Expression transformation, you can use variables to define calculations and store data temporarily.

- When you configure a mapping enabled for full pushdown optimization to read from a Google BigQuery source and write to two Google BigQuery Target transformations that represent the same Google BigQuery table, you can enable the SCD Type 2 merge optimization mode in the task properties. In SCD Type 2 merge optimization mode, when you use two Target transformations in a mapping, one to insert data and the other to update data in the same Google BigQuery target table, Data Integration combines the queries for both Target transformations and issues a Merge query to optimize the task.

- If you want to stop a running job enabled for pushdown optimization, you can clean stop the job. When you use clean stop, Data Integration terminates all the issued statements and processes spawned by the job.

- When you enable full pushdown optimization for a task that reads from and writes to Google BigQuery, you can determine how Data Integration handles the job when pushdown optimization does not work. You can set the task to fail or run without pushdown optimization.

• When you configure a mapping to read data from a Google BigQuery source in staging mode, you can stage the data into the local staging file in Parquet format.

• When you run a mapping to a Google BigQuery target in bulk mode, Data Integration creates a CSV file in the temporary folder in the Secure Agent directory to stage the data before writing the data to the Google BigQuery target. The performance of the task is optimized when the connector uses the CSV file for staging data.

Google Cloud Storage V2 Connector

This release includes the following enhancements for Google Cloud Storage V2 Connector:

• You can run multiple elastic mappings concurrently.

• When you run elastic mappings, you can choose to import metadata for the selected object without parsing other objects, folders, or sub-folders available in the bucket. Directly importing metadata for the selected object can improve performance by reducing the overhead and time taken to parse each object available in the bucket.

• After you incrementally load source files in an elastic mapping, you can run a job to reprocess files that were modified in a specific time interval.

• When you run a mapping, you can read data from or write data to a Google Cloud Storage fixed-width flat file.

• You can use a multi-character delimiter in flat files.

Hive Connector

This release includes the following enhancements for Hive Connector:

• You can configure a Target transformation in a mapping or an elastic mapping to create a target at runtime.

• When you configure a Target transformation in a mapping or an elastic mapping to create a Hive target at runtime, you can include the partition fields of the String data type and set the order in which they must appear in the target.


• When you configure an elastic mapping to read from or write to Hive, you can use Managed Identity Authentication to stage Hive data on Azure.

• You can enable dynamic schema handling in a Hive task to refresh the schema every time the task runs. You can choose how Data Integration handles changes in the Hive data object schemas.

• You can configure a dynamic mapping task to create and batch multiple jobs based on the same mapping.

• You can configure an elastic mapping to read from or write data that contains Array and Struct complex data types. To write Array and Struct data types to Hive, you must configure the elastic mapping to create a new Hive target at runtime. You can also use a Hierarchy Processor transformation in an elastic mapping to read relational or hierarchical input and convert it to relational or hierarchical output.

Important: This functionality is available for preview. Preview functionality is supported for evaluation purposes but is unwarranted and is not production-ready. Informatica recommends that you use it in non-production environments only. Informatica intends to include the preview functionality in an upcoming release for production use, but might choose not to in accordance with changing market or technical circumstances. For more information, contact Informatica Global Customer Support.

JDBC V2 Connector

You can run elastic mappings on a self-service cluster.

Microsoft Azure Data Lake Storage Gen2 Connector

This release includes the following enhancements for Microsoft Azure Data Lake Storage Gen2 Connector:

• You can use Managed Identity authentication to connect to Microsoft Azure Data Lake Storage Gen2. When you use Managed Identity authentication, you do not need to provide credentials, secrets, or Azure Active Directory tokens.

• You can use shared key authentication to connect to Microsoft Azure Data Lake Storage Gen2 using the account name and account key in elastic mappings.

• You can use Microsoft Azure Data Lake Storage Gen2 Connector to connect to Microsoft Azure Data Lake Storage Gen2 on a virtual network with a private endpoint.

• After you incrementally load source files in an elastic mapping, you can run a job to reprocess files that were modified in a specific time interval.

• You can use a multi-character delimiter in flat files.

Microsoft Azure Synapse SQL Connector

This release includes the following enhancements for Microsoft Azure Synapse SQL Connector:

• Pushdown enhancements in mappings using Microsoft Azure Synapse SQL connection

- When you configure a mapping enabled for full pushdown optimization to read from a Microsoft Azure Data Lake Storage Gen2 source and write to a Microsoft Azure Synapse SQL target, you can use the shared key authentication to connect to Microsoft Azure Data Lake Storage Gen2 using the account name and account key.

- When you configure full pushdown optimization for a task, you can determine how Data Integration handles the job when pushdown optimization does not work. You can set the task to fail or run without pushdown optimization.

- If you want to stop a running job enabled for pushdown optimization, you can clean stop the job. When you use clean stop, Data Integration terminates all the issued statements and processes spawned by the job.

- When you configure a full or source pushdown optimization for an Expression transformation, you can use variables to define calculations and store data temporarily.


• You can use Managed Identity authentication to connect to Microsoft Azure Data Lake Storage Gen2 when used to stage files for Microsoft Azure Synapse SQL. When you use Managed Identity authentication, you do not need to provide credentials, secrets, or Azure Active Directory tokens.

• You can use Microsoft Azure Synapse SQL Connector to connect to Microsoft Azure Synapse SQL on a virtual network with a private endpoint.

• You can map the IDENTITY column for a target object in mappings and elastic mappings.

Microsoft SQL Server Connector

This release includes the following enhancements for Microsoft SQL Server Connector:

• When you configure a full or source pushdown optimization for an Expression transformation, you can calculate a unique checksum value for a row of data each time you read data from a source object.

• You can push a few additional functions, such as data type conversion and string operations, to the Microsoft SQL Server database by using full pushdown optimization. For more information about the supported functions, see the help for Microsoft SQL Server Connector.

MongoDB V2 Connector

This release includes the following enhancements for MongoDB V2 Connector:

• You can configure both Atlas and self-managed X509 certificate-based SSL authentication in a MongoDB V2 connection to read from and write data to MongoDB.

• You can parameterize the MongoDB V2 source object, target object, and the connection in elastic mappings.

• You can use the Hierarchy Processor transformation in elastic mappings to convert hierarchical input into relational output and relational input into hierarchical output.

• You can use a serverless runtime environment to run MongoDB V2 elastic mappings.

• You can read and write hierarchical data types such as Array, Object, and ObjectID. To write the hierarchical data types to MongoDB V2, you must configure the mapping to create a new MongoDB V2 target at runtime.

ODBC Connector

This release includes the following enhancements for ODBC Connector:

• You can perform an upsert operation to update or insert data to a Teradata target when you configure a full pushdown optimization with the Teradata ODBC connection.

• When you configure a full pushdown optimization with the Teradata ODBC connection, you can push the TO_CHAR(), TO_DATE(), and a few additional functions to the Teradata database. For more information about the supported functions that you can use with pushdown optimization, see the help for ODBC Connector.

PostgreSQL Connector

You can choose how Data Integration handles changes that you make to the data object schemas. You can also refresh the schema every time you run a PostgreSQL task.

REST V2 Connector

You can use the PATCH HTTP method in source, target, and midstream transformations. Use this method to update existing resources.

REST V3 Connector

You can use REST V3 Connector in a serverless runtime environment.


Salesforce Connector

You can use Salesforce Bulk API 2.0 to perform bulk read and write operations.

Salesforce Analytics Connector

When a connection to Salesforce Analytics fails, the Secure Agent makes up to three attempts to reestablish the connection, at 300-second intervals.

SAP HANA Connector

You can use a serverless runtime environment to run SAP HANA mappings.

Snowflake Data Cloud Connector

This release includes the following enhancements for Snowflake Data Cloud Connector:

• Pushdown enhancements in mappings using the Snowflake Data Cloud connection:

- When you configure a mapping enabled for full pushdown optimization to read from a Snowflake source and write to multiple Snowflake targets, you can enable the following optimization modes in the task properties based on the target operations you specify:

- Multi-insert. Enable this mode when you insert data to all the Snowflake targets defined in the mapping. Data Integration combines the queries generated for each of the targets and issues a single query.

- SCD Type 2 merge. Enable this mode when you use two target transformations in a mapping, one to insert data and the other to update data to the same Snowflake target table. Data Integration combines the queries for both the targets and issues a Merge query.

None is selected by default. When you enable the Multi-insert or the SCD Type 2 merge optimization context, the task is optimized.

- When you configure full pushdown optimization for a task that reads from and writes to Snowflake, you can determine how Data Integration handles the job when pushdown optimization does not work. You can set the task to fail or run without pushdown optimization.

- When you configure pushdown optimization for a mapping that contains an Expression transformation, you can use variables in the expression to define calculations and store data temporarily.

- You can use a reusable sequence in an SQL transformation in a mapping enabled for full pushdown optimization. When you run multiple jobs with the reusable sequence, each session receives unique values in the sequence.

- When you configure an SQL transformation in a mapping enabled for pushdown optimization, you can include functions in an entered query and run queries with the Snowflake target endpoint. For the list of functions that you can use in an entered query, see the help for Snowflake Data Cloud Connector.

- If you want to stop a running job enabled for pushdown optimization, you can clean stop the job. When you use clean stop, Data Integration terminates all the issued statements and processes spawned by the job.

- You can use full pushdown optimization to push the SESSSTARTTIME variable to the Snowflake database.

• When you run a mapping to write data to Snowflake, Data Integration, by default, creates a flat file in a temporary folder on the Secure Agent machine to stage the data before writing to Snowflake. The performance of the task is optimized when the connector uses the flat file for staging data.

• You can run elastic mappings on a self-service cluster.


Changed behavior

This release includes changes in behavior for the following connectors.

Data type changes in elastic mappings

When you write data to an existing target in elastic mappings, note the following data type changes:

• When you run an elastic mapping to write data of the boolean data type to Avro, JSON, ORC, or Parquet files, the data is written as boolean in the target. Previously, boolean data was written as integer in the target.

• When you run an elastic mapping to write data of the float data type to Avro, ORC, or Parquet files, the data is written as float in the target. Previously, float data was written as double in the target.

• When you run an elastic mapping to write data of the date data type to Avro, ORC, or Parquet files, the data is written as date in the target. Previously, date data was written as timestamp in the target.

These changes apply to Amazon S3 V2 Connector, Google Cloud Storage V2 Connector, and Microsoft Azure Data Lake Storage Gen2 Connector.

Delimited format type

Effective in this release, the Delimited format type in the formatting options is renamed to Flat.

This change applies to Amazon S3 V2 Connector, Google Cloud Storage V2 Connector, and Microsoft Azure Data Lake Storage Gen2 Connector.

Flat files in elastic mappings

When you use an elastic mapping to read data from a flat file, you can change the data types before you write to the target.

Previously, the default data type for all fields was set to string. If you modified the data types, the change was not reflected in the target.

Formatting options for a flat file

Effective in this release, the flat file formatting options include the following changes:

• The escape characters in the source data are retained in the target data whether you enable or disable the Is Escape Character Data Retained option. Previously, the escape characters were retained in the target data only if you enabled the Is Escape Character Data Retained option.

• When you set the Qualifier Mode to Minimal and special characters or Unicode characters are enclosed within a qualifier in the source data, the qualifier is not retained in the target. Previously, the qualifier was retained in the target.

• If there is an empty row in the source data, the empty row is written as is to the target. Previously, a qualifier was added to the first column of the empty row in the target.

• If the columns have a qualifier in the source data, the qualifier is retained only for the non-empty columns in the target. Previously, the qualifier was retained for both empty and non-empty columns in the target.

• When you use an escape character to escape a character in the source data that is also specified as a qualifier, the escaped character is retained in the target data. Previously, an extra qualifier was added to the escaped character in the target data.

These changes apply to Amazon S3 V2 Connector, Google Cloud Storage V2 Connector, and Microsoft Azure Data Lake Storage Gen2 Connector.


Multi-character delimiter in flat files

Effective in this release, when you read data from a flat file and specify a multi-character delimiter, all the characters together are treated as the delimiter.

Previously, if you specified a multi-character delimiter, only the first character was treated as the delimiter.

For example, if you specify ^|^ as the delimiter, the three characters ^|^ together now form the delimiter. Previously, only the first character, ^, was used.

This change applies to Amazon S3 V2 Connector, Google Cloud Storage V2 Connector, and Microsoft Azure Data Lake Storage Gen2 Connector.

SQL queries in task logs for pushdown optimization

When you run a mapping enabled for pushdown optimization, the SQL queries recorded in the task logs are formatted for readability.

Previously, the issued SQL queries were unformatted and appeared on a single line.

This change does not apply to pushdown optimization through the ODBC connector.

Amazon Redshift V2 Connector

Effective in this release, Amazon Redshift V2 Connector includes the following changes:

• Even when you do not map the NOT NULL columns that have default values in an Amazon Redshift target table, the insert, update, or upsert operation succeeds and the default values for the NOT NULL columns are used. Previously, if you did not map the NOT NULL columns, the operation failed.

To retain the previous behavior, set the JVM option -DRetainUnmappedNotNullColumnValidation to true in the Secure Agent properties.

• When you read data that contains columns of the decimal data type, the scale that you set for the decimal columns in the Amazon Redshift UI is honored. Previously, the scale that you set for decimal columns in the Amazon Redshift UI was not honored, and values that exceeded the defined scale were also read.

To retain the previous behavior, set the JVM option -honorDecimalScaleRedshift to false in the Secure Agent properties.
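For reference, a sketch of how JVM options like the two above are typically set in the Secure Agent properties, using the same mechanism that the Databricks Delta section of this guide describes (a JVMOption property on the Data Integration service of type DTM). The JVMOption1 slot and the name=value form are assumptions that can differ in your agent configuration:

    Type:  DTM
    Name:  JVMOption1
    Value: -DRetainUnmappedNotNullColumnValidation=true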

• When you configure an Aggregator transformation in a mapping enabled for pushdown optimization and you do not include the incoming field from an aggregate function or a group by field in a field mapping, Data Integration uses the ANY_VALUE() function to return an arbitrary value from the group (see the sketch after this list).

Previously, when you defined how to group data for aggregate expressions in an Aggregator transformation, you had to include each of the incoming fields from an aggregate function or a group by field in the field mapping.

• If the mapping enabled for pushdown optimization contains Union and Aggregator transformations, include the incoming field from the aggregate function or group by field in the field mapping, or remove the field from the aggregate function or group by field altogether. Otherwise, the mapping runs without pushdown optimization.

Previously, the task partially pushed down the mapping logic to the point where the transformations were supported and ran the rest of the mapping without pushdown optimization.
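For illustration, a sketch of the kind of aggregate query that Amazon Redshift's ANY_VALUE() function makes possible; the table and column names are hypothetical:

    SELECT region,
           SUM(amount) AS total_amount,
           ANY_VALUE(updated_by) AS updated_by
    FROM sales
    GROUP BY region;

A field such as updated_by that is neither aggregated nor part of the GROUP BY clause can still appear in the result because ANY_VALUE() returns one unspecified value per group.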

Databricks Delta Connector

Effective in this release, when you configure mappings, the processing logic is pushed by default to the Databricks Delta SQL endpoint.


Previously, you had to configure the Secure Agent properties to use the Databricks Delta SQL endpoint.

Google BigQuery V2 Connector

Effective in this release, Google BigQuery V2 Connector includes the following changes:

• When you write data to a Google BigQuery target in bulk mode and use CSV mode as the staging file format, you can use a precision of up to 15 digits for a column of the Float or Double data type. Previously, you could set a precision of up to 17 digits for a column of the Float or Double data type.

• When you migrate a mapping or an elastic mapping that writes data to a Google BigQuery target created at runtime and you override the target table and dataset names, the Secure Agent creates the target with the overridden target table name irrespective of the Create Disposition value. Previously, the mapping or elastic mapping failed to create the target with the overridden target table name if the Google BigQuery target did not exist and the Create Disposition property was set to Create never.

Hive Connector

Effective in this release, when you run a task, Data Integration logs messages to the following file: <Secure Agent installation directory>/apps/Data_Integration_Server/logs/tomcat/<version>.log

Previously, Data Integration logged messages to the following file: <Secure Agent installation directory>/apps/Data_Integration_Server/<version>/tomcat.out

Snowflake Data Cloud Connector

Effective in this release, Snowflake Cloud Data Warehouse V2 Connector is renamed to Snowflake Data Cloud Connector. You must use the Snowflake Data Cloud connection type in mappings to read from or write to Snowflake.


Chapter 3

Upgrade

The following topics provide information about tasks that you might need to perform before or after an upgrade of Informatica Intelligent Cloud Services Data Integration. Post-upgrade tasks for previous monthly releases are also included in case you haven't performed these tasks after the previous upgrade.

Preparing for the upgrade

The Secure Agent upgrades the first time that you access Informatica Intelligent Cloud Services after the upgrade.

Files that you added to the following directory are preserved after the upgrade:

<Secure Agent installation directory>/apps/Data_Integration_Server/ext/deploy_to_main/bin/rdtm-extra

Perform the following steps to ensure that the Secure Agent is ready for the upgrade:

1. Ensure that each Secure Agent machine has sufficient disk space available for upgrade.

The machine must have 5 GB of free space or the amount of disk space calculated using the following formula, whichever is greater:

Minimum required free space = 3 * (size of current Secure Agent installation directory - space used for logs directory)

For example, if the Secure Agent installation directory uses 4 GB, of which the logs directory accounts for 1 GB, the formula yields 3 * (4 GB - 1 GB) = 9 GB, so 9 GB of free space is required because it is greater than 5 GB.

2. Close all applications and open files to avoid file lock issues, for example:

• Windows Explorer

• Notepad

• Windows Command Processor (cmd.exe)

Post-upgrade tasks for the May 2022 release

Perform the following tasks after your organization is upgraded to the May 2022 release.


Date and Int96 data types in Avro and Parquet files

After the upgrade, an elastic mapping configured to read from or write to an Avro or Parquet file fails in the following cases:

• Data is of the Date data type and the date is earlier than 1582-10-15.

• Data is of the Int96 data type and the timestamp is earlier than 1900-01-01T00:00:00Z.

To resolve this issue, specify the following Spark session properties in the mapping task or in the custom properties file for the Secure Agent:

• spark.sql.legacy.timeParserPolicy=LEGACY
• spark.sql.parquet.int96RebaseModeInWrite=LEGACY
• spark.sql.parquet.datetimeRebaseModeInWrite=LEGACY
• spark.sql.parquet.int96RebaseModeInRead=LEGACY
• spark.sql.parquet.datetimeRebaseModeInRead=LEGACY
• spark.sql.avro.datetimeRebaseModeInWrite=LEGACY
• spark.sql.avro.datetimeRebaseModeInRead=LEGACY

This upgrade impact applies to Amazon S3 V2 Connector, Google Cloud Storage V2 Connector, and Microsoft Azure Data Lake Storage Gen2 Connector.

Post-upgrade tasks for the April 2022 release

Perform the following tasks after your organization is upgraded to the April 2022 release.

TLS 1.0 and 1.1 disablement for the Secure Agent

In the April 2022 release of Informatica Intelligent Cloud Services, support for Transport Layer Security (TLS) 1.0 and 1.1 is disabled on the Secure Agent. The Secure Agent uses TLS version 1.2.

Data that passes between Informatica Intelligent Cloud Services and the Secure Agent is always encrypted using TLS 1.2. You do not need to reconfigure the agent or take any action to enable the agent to communicate with Informatica Intelligent Cloud Services.

Data that passes between the Secure Agent and connector endpoints is also encrypted using TLS 1.2. If you use a connector or access a connection endpoint that uses TLS 1.0 or 1.1, Informatica recommends that you upgrade to a version that uses TLS 1.2. If you cannot do this, you can re-enable TLS 1.0 and 1.1 on the Secure Agent by following the instructions in this KB article: HOW TO: Enable TLS 1.0 and 1.1 on the Secure Agent in Cloud Data Integration.

Amazon Redshift V2 Connector

Effective in this release, for mappings enabled for pushdown optimization to run successfully, you must map all the fields from the SQL query advanced source property to the target.

After you upgrade, if existing mappings enabled for pushdown optimization map only some of the fields from the SQL query to the target, modify the mappings to map all the fields from the SQL query to the target so that they run successfully.


Amazon S3 V2 Connector

After the upgrade, existing elastic mappings configured to read from a JSON partition column fail if you chose to override the folder path in the advanced source properties.

To run the existing mappings successfully, click the Refresh button on the Fields tab or select the source again to refresh the metadata.

Amazon S3 bucket policy for elastic mappings

Effective in this release, when you run an elastic mapping, you must configure the additional Amazon S3 bucket permission ListBucketMultipartUploads, in addition to the existing minimum required permissions, to successfully read data from and write data to AWS resources.

After you upgrade, to run the existing elastic mappings successfully, you must modify the IAM permissions for the user to include the Amazon S3 bucket permission ListBucketMultipartUploads.

This upgrade impact is applicable for Amazon S3 V2 Connector and Amazon Redshift V2 Connector.
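As an illustration only, the following sketch shows an IAM policy statement that grants the new permission; the bucket name is hypothetical, and the statement would be merged into the existing minimum-permission policy for your environment:

    {
        "Effect": "Allow",
        "Action": "s3:ListBucketMultipartUploads",
        "Resource": "arn:aws:s3:::example-staging-bucket"
    }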

Connection with TLS 1.0 or 1.1

After the upgrade, existing mappings fail in the following connectors:

• Microsoft SQL Server Connector

• MySQL Connector

• Oracle Connector

• PostgreSQL Connector

When you run the existing mappings, the mappings fail in the following scenarios:

• The connection uses the TLS 1.0 or 1.1 protocol to connect to the source or target endpoint. To run mappings successfully, edit the connection properties, and from the Crypto Protocol Version option, select TLSv1.2 instead of TLSv1 or TLSv1.1.

• The connection uses the TLS 1.2 protocol, but the source or target endpoint that the connector accesses does not support the TLS 1.2 protocol. To run mappings successfully, Informatica recommends upgrading to an endpoint version that supports TLS 1.2.

Databricks Delta Connector

Effective in this release, when you configure mappings, the processing logic is pushed by default to the Databricks SQL endpoint.

After you upgrade, if you want existing mappings to continue to run on a Databricks analytics or Databricks data engineering cluster, configure the following properties based on the type of operation you want to perform:

• Import metadata: Set the JRE_OPTS property for the Data Integration Service of type Tomcat JRE to the following value: -DUseDatabricksSql=false

• Run mappings (applies only to mappings): Set the JVMOption property for the Data Integration Service of type DTM to the following value: -DUseDatabricksSql=false

• Run mappings enabled with pushdown optimization (applies only to mappings): Set the JVMOption property for the Data Integration Service of type DTM to the following value: -DUseDatabricksSqlForPdo=false

Elastic clusters in an AWS environment

Effective in this release, Data Integration Elastic uses kubeadm as the cluster operator for elastic clusters in an AWS environment. With this change, the Secure Agent proxy server must have access to certain Amazon S3 buckets, the cluster operator policy requires additional permissions, and the ELB security group requires additional inbound traffic rules.

Perform the following tasks:

Configure the Secure Agent proxy server

If your organization uses an outgoing proxy server, allow traffic to the following URLs:

• .s3.amazonaws.com

• <S3 staging bucket>.s3.<bucket region>.amazonaws.com

When you use an Amazon S3 or Amazon Redshift object as a mapping source or target, also allow traffic to each source and target bucket that the agent will access.

If your organization does not use an outgoing proxy server, contact Informatica Global Customer Support to disable the proxy settings used for S3 access.

Grant permissions to the cluster operator policy

Add the following permissions to the cluster operator policy (a sketch of a matching policy statement follows this list):

ec2:CreateLaunchTemplate

ec2:CreateLaunchTemplateVersion

ec2:DeleteLaunchTemplate

ec2:DeleteLaunchTemplateVersions

ec2:DescribeLaunchTemplates

ec2:DescribeLaunchTemplateVersions

Previously, the cluster operator policy was called the kops policy and these permissions were optional.
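As an illustration only, the following sketch shows an IAM policy statement that grants these permissions; the Resource value is a placeholder that you would scope according to your organization's security requirements:

    {
        "Effect": "Allow",
        "Action": [
            "ec2:CreateLaunchTemplate",
            "ec2:CreateLaunchTemplateVersion",
            "ec2:DeleteLaunchTemplate",
            "ec2:DeleteLaunchTemplateVersions",
            "ec2:DescribeLaunchTemplates",
            "ec2:DescribeLaunchTemplateVersions"
        ],
        "Resource": "*"
    }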

Configure the ELB security group

If you create user-defined security groups, add inbound rules for the ELB security group to allow the following traffic (a command-line sketch follows this list):

• Incoming traffic from the Secure Agent that creates the cluster.

• Incoming traffic from master nodes in the same cluster.

• Incoming traffic from worker nodes in the same cluster.
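As an illustration only, an inbound rule that allows traffic from another security group can be added with the AWS CLI as follows; the group IDs, protocol, and port range are placeholders that depend on your cluster configuration:

    aws ec2 authorize-security-group-ingress \
        --group-id sg-0123456789abcdef0 \
        --protocol tcp \
        --port 0-65535 \
        --source-group sg-0fedcba9876543210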

For more information, see Data Integration Elastic Administration.


Flat files with UTF-8-BOM encoding

After the upgrade, an elastic mapping configured to read a flat file with UTF-8-BOM encoding does not map the first column in the source to the target.

To map the first column, you must synchronize all the fields from the source object and rerun the elastic mapping.

This upgrade impact is applicable for Amazon S3 V2 Connector, Google Cloud Storage V2 Connector, and Microsoft Azure Data Lake Storage Gen2 Connector.

Microsoft Azure Synapse SQL Connector

After the upgrade, an existing elastic mapping configured to read data from and write data to Microsoft Azure Synapse SQL might fail if the source fields were dropped after the mapping was created.

To run the existing elastic mapping successfully, you must synchronize all fields with the source object and rerun the elastic mapping.

Microsoft SQL Server Connector

After the upgrade, existing mappings enabled for pushdown optimization that push the MD5() function to the Microsoft SQL Server database through an Expression transformation return a different value for the nchar data type than a mapping that runs without pushdown optimization.

Previously, the MD5() function configured in an Expression transformation ran without pushdown optimization even when you enabled the mappings for pushdown optimization.

To retain the previous behavior, run the existing mappings without pushdown optimization.

SAP Connector

After the upgrade, if you use an HTTPS connection in SAP mappings, new and existing SAP Table Reader and SAP BW Reader mappings might fail in the following scenarios:

• The ABAP Kernel version is 753 or earlier and the CommonCryptoLib version in the SAP system is earlier than 8.4.31. To run mappings successfully, you must upgrade CommonCryptoLib in the SAP system to version 8.4.31 or later. For more information about upgrading the SAP system, see the SAP documentation.

• The ABAP Kernel version is 753 or earlier and the CommonCryptoLib version in the SAP system is 8.4.31 or later. To run mappings successfully, you must enable the TLS 1.2 protocol in the SAP system.

For more information about enabling the TLS 1.2 protocol in the SAP system, see SAP Note 510007.

SSE-KMS encryption for elastic mappings

Effective in this release, an existing elastic mapping enabled for SSE-KMS encryption fails when the connector uses the default IAM role and uses the credentials from the ~/.aws/credentials location.

After you upgrade, to run the existing mappings successfully, you must perform one of the following steps:

• To use the credentials from the ~/.aws/credentials location, you must create the master instance profile and the worker instance profile in AWS, attach the KMS policy to the worker profile, and specify the profiles in the cluster configuration.

• Use the Secure Agent on Amazon EC2, create the master instance profile and the worker instance profile in AWS, and attach the KMS policy to the worker profile.


• Use the Secure Agent on Amazon EC2, use the default IAM role, and attach the KMS policy to the Secure Agent role.

This upgrade impact is applicable for Amazon S3 V2 Connector and Amazon Redshift V2 Connector.

File Integration Service proxy

If you use the file integration proxy server, update the server with the latest version of the fis-proxy-server_<version>.zip file.

For more information, see What's New in the Administrator help.


Chapter 4

Enhancements in previous releases

You can find information on enhancements and changed behavior in previous Data Integration releases on Informatica Network.

What's New guides for releases occurring within the last year are included in the following community article: https://network.informatica.com/docs/DOC-17912

