Cognizant interview questions on Informatica
- 1) What is a Star schema in Informatica?
- 2) What is a Snowflake schema?
- 3) How do you remove duplicate values in Informatica from a DB and a flat file?
- 4) What are different types of SCD?
- 5) What is Performance tuning in Informatica?
- 6) What is Lookup Transformation in Informatica?
- 7) What is a difference between joiner and lookup transformation?
- 8) What are different types of loading in Informatica?
- 9) What is incremental loading in Informatica?
- 10) Which type of transformation is not supported by mapplets?
- 11) What are bridge tables in Informatica?
Below are the answers to these Cognizant Informatica interview questions.
A Star Schema is a data warehouse design in which a central fact table is surrounded by a number of associated dimension tables. Because its structure resembles a star, it is known as a star schema.
In a data warehouse, a Snowflake Schema is a logical arrangement of tables in a multidimensional database in which the dimension tables are normalized into multiple related tables, so the diagram resembles a snowflake.
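The difference between the two layouts can be sketched in plain Python (outside Informatica; all table and column names here are invented for illustration):

```python
# Star schema: one fact table plus a denormalized dimension table.
dim_product_star = {
    101: {"name": "Laptop", "category": "Electronics"},  # category stored inline
}

# Snowflake schema: the same dimension normalized into related tables.
dim_category = {1: {"category": "Electronics"}}
dim_product_snow = {101: {"name": "Laptop", "category_id": 1}}

fact_sales = [{"product_id": 101, "amount": 999.0}]

def category_star(product_id):
    """Star: one hop, fact -> dimension."""
    return dim_product_star[product_id]["category"]

def category_snow(product_id):
    """Snowflake: two hops, fact -> dimension -> sub-dimension."""
    cat_id = dim_product_snow[product_id]["category_id"]
    return dim_category[cat_id]["category"]
```

Both functions return the same answer; the snowflake version simply needs one extra join, which is the usual trade-off between the two schemas.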
When the source is a database, duplicate rows can be eliminated with the DISTINCT keyword: enable the Distinct option of the Source Qualifier (or add DISTINCT to its SQL override) and load the target accordingly. A GROUP BY on all columns in the SQL override, or an Aggregator transformation grouping by all ports, achieves the same result.
When the source is a flat file, the Distinct option is disabled in the Source Qualifier. Instead, use a Sorter transformation and check its Distinct option; all columns are then selected as sort keys, in ascending order by default.
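The Sorter-with-Distinct behavior described above can be mimicked in plain Python (a sketch outside Informatica; the sample records are made up):

```python
def distinct_sorted(rows):
    """Mimic a Sorter transformation with the Distinct option enabled:
    sort on all columns (ascending) and drop exact-duplicate rows."""
    seen = set()
    out = []
    for row in sorted(rows):   # every column acts as a sort key
        if row not in seen:
            seen.add(row)
            out.append(row)
    return out

records = [("A", 1), ("B", 2), ("A", 1), ("C", 3), ("B", 2)]
print(distinct_sorted(records))  # [('A', 1), ('B', 2), ('C', 3)]
```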
The different types of SCD (Slowly Changing Dimension) are as follows:
- Type 0 – Fixed dimension. No changes are allowed; the dimension never changes.
- Type 1 – No history. The record is updated in place; only the current state is kept, with no record of historical values.
- Type 2 – Row versioning. Each change inserts a new row, so the full history is preserved.
- Type 3 – Previous-value column. The prior value is stored in an additional column alongside the current one.
- Type 4 – History table. Current values stay in the dimension table; history is kept in a separate table.
- Type 6 – Hybrid SCD. Combines techniques from Types 1, 2, and 3.
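The Type 2 row-versioning logic above can be sketched in plain Python (outside Informatica; the dictionary layout and field names are invented for illustration):

```python
from datetime import date

def scd2_upsert(dimension, natural_key, new_attrs, today):
    """SCD Type 2: expire the current row and insert a new version
    whenever the tracked attributes change."""
    current = next((r for r in dimension
                    if r["key"] == natural_key and r["current"]), None)
    if current and current["attrs"] == new_attrs:
        return  # no change: nothing to do
    if current:
        current["current"] = False       # expire the old version
        current["end_date"] = today
    dimension.append({"key": natural_key, "attrs": new_attrs,
                      "start_date": today, "end_date": None,
                      "current": True})  # insert the new version

dim_customer = []
scd2_upsert(dim_customer, "C1", {"city": "Pune"}, date(2023, 1, 1))
scd2_upsert(dim_customer, "C1", {"city": "Chennai"}, date(2024, 6, 1))
# dim_customer now holds two versions: the expired Pune row and
# the current Chennai row.
```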
Performance tuning is the improvement of system performance. It starts with identifying bottlenecks in the source, target, and mapping, and proceeds to session tuning. Performance tuning aims to optimize session performance by eliminating those bottlenecks so that the ETL load completes in an acceptable time.
Lookup transformation performs a kind of join operation and is a passive transformation. One of the joining tables is the source data, and the other is known as the lookup table. It is used to look up a source, source qualifier, or target to retrieve relevant data, and it returns the result of the lookup to the target or to another transformation.
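A connected lookup that enriches each source row from a cached lookup table can be sketched in plain Python (outside Informatica; the `region` field and sample rows are invented):

```python
def lookup_enrich(source_rows, lookup_table, key, default=None):
    """Mimic a connected Lookup transformation: for each source row,
    fetch a related value from the lookup table by key; rows without
    a match receive a default, like a lookup returning NULL."""
    index = {r[key]: r for r in lookup_table}   # the lookup cache
    out = []
    for row in source_rows:
        match = index.get(row[key])
        enriched = dict(row)
        enriched["region"] = match["region"] if match else default
        out.append(enriched)
    return out

orders = [{"cust_id": 1, "amount": 50}, {"cust_id": 9, "amount": 20}]
customers = [{"cust_id": 1, "region": "APAC"}]
result = lookup_enrich(orders, customers, "cust_id")
# cust_id 1 gets region "APAC"; cust_id 9 has no match, so None
```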
The differences between the Joiner and Lookup transformations are as follows:

| Joiner | Lookup |
| --- | --- |
| Joins data from two sources, homogeneous or heterogeneous, residing at different locations. | Retrieves related values from another table, or checks a target table for existing rows (e.g. for updates). |
| Supports equi-joins only (the `=` operator). | Supports the `=`, `<`, `>`, `<=`, and `>=` operators, so non-equi conditions are possible. |
| Supports normal, master outer, detail outer, and full outer join types. | Behaves like a left outer join on the lookup table; other outer-join types are not available. |
| Can be used on sources only. | Can be used on a source as well as a target. |
| No SQL override is available. | A lookup SQL override can be used to customize the lookup query. |
| Both joined tables must participate in the mapping. | The lookup table need not participate in the mapping pipeline. |
| An active transformation. | A passive transformation. |
The different types of loading in Informatica are as follows:
- Normal loading
- Bulk loading
Normal loading inserts records one at a time and writes a database log for each, so the session can be recovered after a failure, but it is slower. Bulk loading bypasses the database log to load data much faster, but the session cannot be recovered.
Incremental data loading is the process of loading only the selective data that has been created or updated in the source system since the last run. This is different from a full data load, where the entire data set is processed on each load.
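The incremental-extraction idea can be sketched in plain Python (outside Informatica; the `updated_at` column and watermark variable are illustrative assumptions):

```python
from datetime import datetime

def incremental_extract(source_rows, last_load_time):
    """Select only rows created or updated since the previous run,
    instead of re-reading the full source as a full load would."""
    return [r for r in source_rows if r["updated_at"] > last_load_time]

source = [
    {"id": 1, "updated_at": datetime(2024, 1, 1)},
    {"id": 2, "updated_at": datetime(2024, 3, 5)},
]
# Watermark recorded at the end of the previous session run.
last_run = datetime(2024, 2, 1)
delta = incremental_extract(source, last_run)  # only id 2 qualifies
```

In practice the watermark is often kept in a mapping variable or control table and advanced after each successful load.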
The following are not supported by mapplets: Normalizer transformations, COBOL sources, XML sources, XML Source Qualifier transformations, target definitions, pre- and post-session stored procedures, and other mapplets.
A bridge table sits between a fact table and a dimension table and is used to resolve a many-to-many relationship between them. It typically contains only the key columns of the two tables it connects.
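A minimal Python sketch of the idea (the joint-account example and all names are invented): one fact row links to several dimension members through the key-only bridge rows.

```python
# Fact, dimension, and the bridge resolving their many-to-many link.
fact_accounts = [{"account_id": "A1", "balance": 500}]
dim_customer = {1: "Alice", 2: "Bob"}        # joint account holders
bridge = [                                    # key columns only
    {"account_id": "A1", "customer_id": 1},
    {"account_id": "A1", "customer_id": 2},
]

def holders(account_id):
    """Resolve all dimension members tied to one fact row via the bridge."""
    return [dim_customer[b["customer_id"]]
            for b in bridge if b["account_id"] == account_id]

print(holders("A1"))  # ['Alice', 'Bob']
```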