Informatica Interview Questions

Informatica: Introduction

Informatica PowerCenter is a widely used ETL tool for information extraction, transformation, and loading, and it is used to build enterprise data warehouses. The components within Informatica help extract data from its sources so that it can serve business requirements. Before the data can be used for those requirements, it must first be transformed, and finally it is loaded into a target data warehouse.

Informatica: Career Prospects

Informatica is widely valued for efficient data processing, data partitioning, and bulk extraction, which together deliver high-quality work at any large-scale organization. Most business domains use Informatica tools, so career prospects in the field are excellent. But to land a good Informatica job, you need to successfully crack the Informatica interview questions.

Cracking Informatica Interview Questions

Cracking Informatica interview questions is not rocket science. Since the web world is rapidly evolving, there are always new challenges in the Informatica field, and you will never be able to know it all. What you should do is brush up on your foundation: know and understand the basics. A clear grasp of the fundamentals of Informatica will give you an upper hand in the interview. Also, be open and honest about what you know and what you don't know; this gives the interviewers a clear picture. At the same time, display a learning attitude. Make sure you tell the interviewer that you are willing to learn new things and pick up new skills, which will further help your personal and professional growth.

Following is a list of some Informatica interview questions and their answers.


Below is a list of the best Informatica interview questions and answers.

There are four schemas. They are:
  • Star Schema
  • Snowflake Schema
  • Galaxy Schema
  • Fact Constellation Schema
In enterprise data warehousing, all of an organization's data is kept at a single point of access. The data can be given a global view to the server through a single source of storage, and we can do periodic analysis on the same source. It takes time, but it gives better results.
A data mart is a simple form of a data warehouse that is focused on a single subject or functional area, such as sales, finance, or marketing.
A data warehouse is a collection of multiple functional areas. It is the central unit that is made by combining all the data marts.
We can delete the duplicate rows coming from the source in two ways. They are:
  1. Using Source Qualifier, under the properties tab select Distinct
  2. Using Sorter, under the properties tab select Distinct

The first option is better when the source is relational, but when the source is a flat file you have to use the second option.

In Informatica, linking more than one session together is called a batch. There are two types of batches:
  • Sequential batches: sessions run one after another
  • Concurrent batches: sessions run at the same time

Yes, there are difficulties while working with flat files. Some of them are:

  1. We can't use a SQL override with flat files; instead, we have to use transformations.
  2. Testing the flat files is a very tedious job.
  3. We need to specify the correct path in the session and mention whether the file is 'direct' or 'indirect', and the file must be kept in the exact path mentioned in the session.
  4. If we miss the link to any column of the target, all the data will be placed in the wrong fields, and the missed column won't exist in the target data file.

DECODE is a special function that searches a port for a specified value. It is the better option when the number of conditions is large, because it is less costly than nested IIF functions, and DECODE can be used in a SELECT statement whereas the IIF function cannot.
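A minimal sketch of the two styles in an expression transformation, assuming a hypothetical STATUS port (the port name and values are illustrative only):
  -- DECODE: one call covers all the searches plus a default
  DECODE(STATUS, 'A', 'Active',
                 'I', 'Inactive',
                 'C', 'Closed',
                 'Unknown')
  -- Equivalent nested IIF: grows harder to read as conditions increase
  IIF(STATUS = 'A', 'Active',
  IIF(STATUS = 'I', 'Inactive',
  IIF(STATUS = 'C', 'Closed', 'Unknown')))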

We can't say outright whether Joiner or Lookup is better, because each has its own functionality; it all depends on the table size and the source. With sorted flat-file data, a Joiner is more effective than a Lookup because the Joiner caches fewer rows, whereas a Lookup always caches the whole file. With a database source, a Lookup is effective when the database can return sorted data quickly and the amount of data is small.
A domain is the primary component of Informatica PowerCenter; it is used for managing and administering the various services in PowerCenter.
A Mapplet is used for creating and configuring a group of transformations. A Worklet is an object that combines a set of tasks to build workflow logic. It can be reused in multiple workflows and can be configured to run concurrently.
In architectural terms, Informatica PowerCenter is a service-oriented architecture, which provides the capability to share services and resources across multiple machines.
A cumulative sum is the running (partial) sum of a given column, and it can be calculated by using CUME(column_name).
A moving sum returns the sum of a specified set of rows. It ignores null values when calculating the moving sum, so if all values are NULL the function returns NULL. The syntax is:
MOVINGSUM(column_name, rowset)
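A small sketch of both functions, assuming a hypothetical numeric SALES port; the port name and rowset size are illustrative:
  -- Running (cumulative) total of SALES over all rows read so far
  CUME(SALES)
  -- Sum of the current SALES value and the two preceding rows (rowset = 3)
  MOVINGSUM(SALES, 3)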
A session is a set of instructions that describes the movement of data from source to target.
There are two types of sessions in Informatica:
  • Reusable Sessions
  • Non-Reusable Sessions
A transformation is a repository object that generates, modifies, or passes the source data according to the requirements of the target system. It is mainly classified into 4 types:
  • Active
  • Passive
  • Connected
  • Unconnected
A surrogate key is a unique identification key that acts as a substitute for the natural primary key. It is unique for each row of the table, which is why it is very useful for identifying each unique row.
pmcmd (PowerMart Command) is a built-in command-line program or utility that is used to interact with the Informatica server. This command is mainly used to perform the following tasks:
  • Scheduling a workflow
  • Starting or stopping a workflow
  • Starting a workflow from a task
  • Stopping/aborting a specific task
  • Aborting a workflow
No. Both kill the process, but in different ways; the process timeout period is the main difference between Stop and Abort.
Stop will stop reading from the source and release the memory blocks occupied by the session, but it will continue updating and committing the changes in the target. Abort also stops reading, but it has a timeout period of 60 seconds: if the session fails to update or commit the changes within 60 seconds, the session is aborted by terminating the DTM process thread, and the memory is not released immediately.
To ensure data consistency, Stop will try to roll back the data, whereas Abort kills the process immediately, so the data cannot be rolled back.
A group of sessions executed either serially or in parallel by the Informatica server is called a batch.
There are two types of batches:
A batch that runs its sessions one after another is called a sequential batch, and a batch that runs its sessions at the same time is a concurrent batch.
Yes, we can generate sequence numbers with an Expression transformation instead of the Sequence Generator transformation.
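A minimal sketch of the Expression-transformation approach, with hypothetical port names; a variable port keeps its value across rows, so incrementing it once per row yields a running sequence:
  -- Variable port v_seq (initial value 0): expression evaluated for every row
  v_seq + 1
  -- Output port o_seq_no connected to the target column: returns the current counter
  v_seq
Unlike the Sequence Generator, this counter restarts at its initial value on every session run unless you persist it, for example through a mapping variable.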
In a static cache, the cache memory is not refreshed even if a record is inserted or updated in the lookup table; it is refreshed only in the next session run.
In a dynamic cache, the cache memory is refreshed as soon as a record is inserted or updated in the lookup table.
ETL (Extract, Transform, Load) itself tells us that it extracts, transforms, and loads data from source to destination for better decision making. If we don't use ETL tools, we have to do all of this manually by writing SQL code, which is not possible for an end user and can be done only by expert programmers. That process is very tedious and cumbersome in many cases because it involves many resources and complex coding. These difficulties are eliminated by ETL tools because they are very comfortable to use and offer many other advantages at all stages, such as visual flow, structured system design, operational resilience, impact analysis, data profiling and cleansing, and excellent performance.
The fact table is the centralized table in a star schema, which is why it is also called the central table. A fact table mainly contains facts, i.e., measurements (values) related to the data in the dimension tables. It typically has two types of columns: one contains facts (values) and the other contains foreign keys to the dimension tables. A composite key made up of all of its foreign keys acts as the primary key of the fact table.
Polling means displaying updated data about a session in a window while it is running. The Monitor window displays the status of each session when you poll the Informatica server.
A dimension table consists of attributes about the facts and stores the textual descriptions of the business. Dimensions are very important for measuring the facts. They are mainly of 4 types:
  • Conformed Dimension
  • Degenerated Dimension
  • Role-Playing Dimension
  • Junk Dimension
Not directly; we cannot generate reports from Informatica itself, but we can generate a metadata report by using the Informatica metadata-driven reporting tool.
Rank transformation is an active and connected transformation. It is used to select the bottom or top rank of data, which means selecting the smallest or largest numeric value in a group or port. The rank index port is used by the Informatica server to store the rank position for each record, and the Designer creates a rank index port automatically for each Rank transformation. For example, if you create a Rank transformation that ranks the top 10 products for each quarter, the rank index numbers the products from 1 to 10.
Target load plan (or target load order) is used to specify the order in which the Integration Service loads the targets. We can specify a target load order based on the Source Qualifier transformations in a mapping. If we are working with multiple Source Qualifier transformations connected to multiple targets, we can designate the order in which the Integration Service loads data into the targets.
The Integration Service process starts the Data Transformation Manager (DTM) process when the workflow reaches a session. The DTM process is also known as the pmdtm process. The main purpose of the DTM process is to create and manage the threads that carry out the session tasks.
We can generate sequence numbers by using a Sequence Generator transformation or an Expression transformation.
The Normalizer transformation. Because COBOL sources consist of denormalized data, a Normalizer transformation is essential to get normalized data.
Active and passive transformations transform records in two different ways. An active transformation changes the number of rows that pass through the mapping, whereas a passive transformation does not change the number of rows.
We can create ports in two ways: by dragging a port from another transformation, or by clicking the Add port button on the Ports tab.
The index cache and the data cache store different port values: the index cache stores the values of the ports used in the condition (the condition columns), while the data cache stores the values of the remaining connected output ports.
Some multiple input group transformations require the Integration Service to block data at an input group while it waits for a row from a different input group. A blocking transformation is a multiple input group transformation that blocks incoming data. The Custom transformation and the Joiner transformation fall into this category.
Transaction Control is a connected and active transformation, and it is used to control the commit and rollback of transactions. We can define the transaction based on a varying number of input rows. It is used at two levels: the mapping level and the session level.
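A minimal sketch of a mapping-level transaction control expression, assuming hypothetical DEPT_ID and v_prev_dept ports (the previous value would be carried by an upstream Expression transformation); it commits the open transaction whenever the department changes:
  -- Commit the buffered rows before starting a new department, otherwise keep the transaction open
  IIF(DEPT_ID <> v_prev_dept, TC_COMMIT_BEFORE, TC_CONTINUE_TRANSACTION)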
We can change a non-reusable transformation into a reusable transformation by selecting the reusable transformation in the Navigator, dragging it to the mapping, and holding down the Ctrl key just before releasing it into the mapping.

Informatica transformations are repository objects that can read, modify, or pass data to defined target structures such as tables, files, or any other structures required. Active transformations are those that alter the data rows and the number of input rows passed to them.

Aggregator, Filter, Joiner, and Normalizer are a few examples of active transformations in Informatica. An active transformation can change the number of rows that pass through the transformation, change the transaction boundary, or change the row type attribute.