Here I have collected some good interview questions with answers about Informatica that are generally asked. Upon completion of a session, the Informatica server stores the end value of a mapping variable in the repository, and that value is reused when the session restarts.
The Informatica PowerCenter Partitioning Option increases performance through parallel data processing. The Partitioning Option lets you split a large data set into smaller subsets that can be processed in parallel for better session performance.
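To see the idea behind the Partitioning Option, here is a minimal Python sketch (not PowerCenter code; all names are illustrative): split the data set into subsets, process each subset concurrently, and merge the results.

```python
from concurrent.futures import ThreadPoolExecutor

def transform(row):
    # Stand-in for whatever per-row work the session performs.
    return row * 2

def process_partition(rows):
    # Each partition processes its subset of the data independently.
    return [transform(r) for r in rows]

def run_partitioned(data, partitions=4):
    # Round-robin split of the large data set into smaller subsets,
    # then process the subsets concurrently and merge the results.
    subsets = [data[i::partitions] for i in range(partitions)]
    with ThreadPoolExecutor(max_workers=partitions) as pool:
        results = list(pool.map(process_partition, subsets))
    return [row for subset in results for row in subset]
```

Threads stand in here for session partitions; PowerCenter itself decides the partition type (round-robin, hash, key range, etc.) per pipeline stage.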
Explain shared cache and re-cache.
To answer this question, it is essential to understand the persistent cache. When we perform a lookup on a table, the server reads the lookup data and brings it into the data cache. By default, at the end of each session the Informatica server deletes the cache files. If you configure the lookup as a persistent cache, the server saves the cache files so they can be reused in later runs. A shared cache allows other mappings to use this cache by directing them to the existing cache files.
After a while, the data in a lookup table becomes old or redundant. When new data enters the table, re-cache (cache refresh) rebuilds the cache files so that the cached data stays consistent with the table.
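The persistent/shared/re-cache behavior can be sketched in plain Python (an analogy, not the PowerCenter implementation; the file name and function are hypothetical): a lookup cache saved to disk is reused across runs, any mapping pointing at the same file shares it, and a refresh flag plays the role of re-cache.

```python
import json
import os

CACHE_FILE = "lookup_cache.json"  # hypothetical cache file name

def build_lookup_cache(fetch_rows, refresh=False):
    # Persistent cache: if the saved cache file exists, reuse it instead
    # of re-reading the lookup table. refresh=True plays the role of
    # re-cache, forcing the cache to be rebuilt from the table.
    if not refresh and os.path.exists(CACHE_FILE):
        with open(CACHE_FILE) as f:
            return json.load(f)
    cache = {key: value for key, value in fetch_rows()}
    with open(CACHE_FILE, "w") as f:
        json.dump(cache, f)
    return cache
```

Two "mappings" calling `build_lookup_cache` against the same file are, in effect, sharing one cache, which is the essence of a shared cache.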
I hope this Informatica Interview Questions blog was of some help to you. We also have another Informatica Interview Questions blog in which scenario-based questions have been compiled; it tests your hands-on knowledge of working with the Informatica tool. You can go through that Scenario-Based Informatica Interview Questions blog by clicking on the hyperlink or on the button at the right-hand corner.
What is an Expression Transformation in Informatica? An expression transformation is a common PowerCenter mapping transformation. It is used to transform data passed through it one record at a time. The expression transformation is passive and connected. Within an expression, data can be manipulated, variables created, and output ports generated. We can write conditional statements within output ports or variables to transform data according to our business requirements.
How to delete duplicate rows using Informatica? Scenario 1: duplicate rows are present in a relational database.
Suppose we have duplicate records in the source system and we want to load only the unique records into the target system, eliminating the duplicate rows.
Reading data from XML files: XML is a case-sensitive markup language. Files are saved with the .xml extension. XML files follow a hierarchical, parent-child file format, and the files can be normalized or denormalized.
What is Informatica PowerCenter? Informatica has many products.
Informatica PowerCenter is one of those products. Using Informatica PowerCenter we perform extraction, transformation, and loading (ETL).
What is meant by active and passive transformation? An active transformation is one that performs any of the following actions: changes the number of rows between transformation input and output (e.g., the Filter transformation), or changes the transaction boundary by defining commit or rollback points.
A passive transformation, such as the Expression transformation, does neither. We can configure a Lookup transformation to cache the underlying lookup table. With a static (read-only) lookup cache, the Integration Service caches the lookup table at the beginning of the session and does not update the cache while it processes the Lookup transformation. With a dynamic lookup cache, the Integration Service dynamically inserts or updates data in the lookup cache and passes the data to the target.
The dynamic cache is synchronized with the target. In case you are wondering why we need to make a lookup cache dynamic, read this article on dynamic lookup. What is the expected value if a column in an Aggregator transformation is neither a group-by port nor an aggregate expression? The Integration Service produces one row for each group based on the group-by ports. A column that is neither part of the key nor an aggregate expression returns the corresponding value from the last record of the group received.
However, if we explicitly apply the FIRST function, the Integration Service returns the value from the first row of the group. So the default is the LAST function. How does the Sorter cache work? The Integration Service passes all incoming data into the Sorter cache before the Sorter transformation performs the sort operation. The Integration Service uses the Sorter Cache Size property to determine the maximum amount of memory it can allocate to perform the sort operation.
If it cannot allocate enough memory, the Integration Service fails the session. For best performance, configure Sorter cache size with a value less than or equal to the amount of available physical RAM on the Integration Service machine.
If the amount of incoming data is greater than the Sorter cache size, the Integration Service temporarily stores data in the Sorter transformation work directory. The Integration Service requires disk space of at least twice the amount of incoming data when storing data in the work directory. What is a Union Transformation? The Union transformation is an active, connected, non-blocking, multiple-input-group transformation used to merge data from multiple pipelines or sources into one pipeline branch.
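Functionally, the Union transformation behaves like SQL UNION ALL: it concatenates the input groups into one pipeline without removing duplicates. A minimal Python sketch of that merge behavior (illustrative only, not PowerCenter code):

```python
from itertools import chain

def union_transformation(*pipelines):
    # Merge rows from multiple pipelines into a single pipeline branch.
    # Like SQL UNION ALL, the Union transformation does not remove
    # duplicate rows and does not block any input group.
    return list(chain.from_iterable(pipelines))
```

Note that rows common to two pipelines appear twice in the output; if you need distinct rows, you still have to deduplicate downstream (e.g., with a Sorter or Aggregator).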
What is the difference between Router and Filter? The following differences can be noted:
Router: divides the incoming records into multiple groups based on conditions; the groups can be mutually inclusive, so different groups may contain the same record. Filter: restricts or blocks the incoming record set based on one given condition.
Router: does not itself block any record; if a record does not match any of the routing conditions, it is routed to the default group. Filter: has no default group; records that fail the condition are simply dropped.
What will be the approach? Assuming that the source system is a relational database, to eliminate duplicate records we can check the Distinct option on the Source Qualifier of the source table and load the target accordingly.
But what if the source is a flat file? How can we then remove the duplicates? Scenario 2: since the source system is a flat file, you will not be able to select the Distinct option in the Source Qualifier, as it is disabled for flat-file sources. The next approach is to use a Sorter transformation and check its Distinct option. When we select the Distinct option, all columns are selected as keys, in ascending order by default.
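The Sorter-with-Distinct approach amounts to sorting on all columns and keeping one occurrence of each identical row. A small Python sketch of that logic (an analogy, not PowerCenter code), with each row modeled as a tuple of column values:

```python
def distinct_rows(rows):
    # Sort on all columns (the default when the Distinct option is
    # checked) and keep only the first occurrence of each identical row,
    # mimicking the Sorter transformation's Distinct behavior.
    deduped = []
    for row in sorted(rows):
        if not deduped or row != deduped[-1]:
            deduped.append(row)
    return deduped
```

Because every column participates in the sort key, two rows are considered duplicates only when they match on all columns.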
Another way to handle duplicate records in a source batch run is to use an Aggregator transformation, checking Group By on the ports carrying the duplicated data. Here you have the flexibility to select the last or the first of the duplicate records.
Differences between connected and unconnected lookup?
Connected Lookup: 1. Part of the mapping data flow. 2. Returns multiple values by linking output ports to another transformation. 3. Executed for every record passing through the transformation. 4. More visible; shows where the lookup values are used. 5. Default values are used.
Unconnected Lookup: 1. Separate from the mapping data flow. 2. Returns one value, via the output port whose Return (R) option is checked. 3. Executed only when the lookup function is called. 4. Less visible, as the lookup is called from an expression within another transformation. 5. Default values are ignored.
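The contrast can be sketched in Python (illustrative only; the lookup table, default value, and function names are hypothetical): the connected lookup sits in the data flow and applies a default on a miss, while the unconnected lookup is called like a function and ignores defaults, returning NULL (None) on a miss.

```python
LOOKUP_TABLE = {"IN": "India", "US": "United States"}  # hypothetical data
DEFAULT_VALUE = "Unknown"

def connected_lookup(row):
    # Connected lookup: part of the data flow, runs for every record,
    # can feed multiple output ports; default values are used on a miss.
    country = LOOKUP_TABLE.get(row["code"], DEFAULT_VALUE)
    return {**row, "country": country}

def unconnected_lookup(code):
    # Unconnected lookup: invoked like a function (:LKP.lookup_name) from
    # an expression and returns a single value; default values are
    # ignored, so a miss yields None (NULL).
    return LOOKUP_TABLE.get(code)
```

This mirrors point 5 of the comparison: the connected version substitutes "Unknown" for a missing key, while the unconnected version simply returns NULL.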
What is pmcmd? It is a command-based client program that communicates with the Integration Service to perform some of the tasks that can also be performed using the Workflow Manager client. It can be used in two ways: interactive mode and command-line mode.
What is a session? A session is a set of instructions that tells the ETL server to move data from source to destination.
Mention a few PowerCenter client applications with their basic purpose. Tasks like session and workflow creation, monitoring workflow progress, designing mapplets, etc. are performed by the PowerCenter client applications.
Repository Manager: an administrative tool whose basic purpose is to manage repository folders, objects, groups, etc.
Administration Console: used to administer the domain and its services.
PowerCenter Designer: consists of various designing tools that serve various purposes, such as the Source Analyzer, Mapping Designer, and Mapplet Designer.
To help develop a workflow, three tools are available in the Workflow Manager, namely the Task Developer, Workflow Designer, and Worklet Designer.
Workflow Monitor: as the name suggests, it monitors workflows and tasks, displaying them in the Gantt Chart view and the Task view.
Why do we need Informatica?
Informatica comes into the picture wherever we have a data system available and we want to perform certain operations on the data at the back end. This can be cleaning up the data, modifying the data, and so on. Informatica offers a rich set of features, like row-level operations on data, integration of data from multiple structured, semi-structured, or unstructured systems, and scheduling of data operations. It also maintains metadata, so information about the process and the data operations is preserved. Can we validate all mappings in the repository simultaneously?
At a time we can validate only one mapping, so mappings cannot be validated simultaneously. What is an Expression transformation? It is used for performing non-aggregate calculations. We can test conditional statements before the output results move to the target tables. What happens to a mapping if we alter the datatypes between the Source and its corresponding Source Qualifier?
The Source Qualifier transformation displays the transformation datatypes. The transformation datatypes determine how the source database binds data when the Integration Service reads it.
Now, if we alter the datatypes in the Source Qualifier transformation, or if the datatypes in the source definition and the Source Qualifier transformation do not match, the Designer marks the mapping as invalid when we save it.
Discuss — Ravi Ranjan, Feb 10: The Aggregator is an active transformation.
Discuss — Ravi, Feb 11: The Update Strategy transformation's properties default to Data Driven; with Data Driven, the mapping decides how each row is treated (e.g., insert, update, delete, or reject). In a sequential batch, one session ends before the next begins, whereas in a concurrent batch the sessions run simultaneously, depending on CPU availability.
Discuss — Ravi, Feb 07: The tracing levels are Normal, Terse, Verbose Initialization, and Verbose Data.