If you are preparing for a Business Objects interview, make sure to go through this question series. Here you will find the latest interview questions and answers on Business Objects frameworks and tools.

Business Interview Questions.

Welcome to this collection of Business Objects questions with appropriate answers. The list covers the various short topics that are part of Business Objects, such as connections, custom hierarchies, contexts, data sources, and many more. Based on our past years of experience, we picked the questions that are frequently asked in interviews. Spend a few minutes on the full article and you will not miss a single topic.

Sharad Jaiswal

Updated On 22 Dec, 2020


1) What do you mean by Business Objects?

Business Objects is designed for business professionals, to help them retrieve data from corporate databases in such a way that they can access the data directly from their desktops. The documents created from the retrieved data can be analyzed and presented in various formats. Higher-level management can use Business Objects as an OLAP tool and as a major part of the decision-making system. In short, Business Objects can be described as an integrated query, reporting, and analysis tool whose purpose is to find answers for business professionals by retrieving data for them.

2) What are the characteristics of Business Objects?

Business Objects has several characteristics, which are described below:

  • Querying and data analysis: It provides ad-hoc querying and complex data analysis.
  • Total cost: It keeps the total cost of ownership low because it is an integrated, highly available, and scalable BI enterprise platform.
  • User interaction: Business Objects is easy to use and highly user-friendly, so users become productive in a short amount of time.

3) List some advantages of using Business Objects?

Business Objects has many advantages, some of which are listed below:

  • Extremely user-friendly and easy to use
  • Has a good graphical user interface
  • Uses business terms that are familiar to business professionals
  • Produces powerful reports in a short amount of time
  • Can deploy documents on an enterprise basis using WebI (Web Intelligence)
  • Supports drag-and-drop

4) What is Designer, and what kinds of modes are linked with designers and Business Objects?

Designer is the module used by universe designers to develop and maintain universes; it is part of the Business Objects suite. Here, a universe is the semantic layer that isolates end users from the technical issues associated with the underlying databases. Universe designers can make a universe available to end users by exporting it to the central repository or by distributing it through the file system.

There are mainly two kinds of modes linked with them. These are:

  • Enterprise Mode
  • Workgroup Mode

5) What are the main databases through which Business Objects can help you access data?

Business Objects provides various data sources to access data from, but most commonly you access data from an RDBMS. Here are some main RDBMS databases:

  • Oracle
  • MySQL
  • IBM DB2

6) What do you understand by a custom hierarchy, and how can it be created?

Custom hierarchies are defined in a universe to provide customized drill-down between objects that may belong to the same or different classes, according to user needs. They can be created in BO Designer by selecting the Tools option and then Hierarchies (Tools -> Hierarchies in BO Designer).

Drill down is a procedure related to the multidimensional analysis available inside Business Objects.

7) What is a chasm trap, and how can it be solved?

A chasm trap occurs when the values in the fact tables get inflated. This condition arises in a universe where a single dimension table is joined to two fact tables. When we drag a measure from both fact tables along with dimensions from the dimension table, the measure values from the fact tables get inflated. This situation is known as a chasm trap.

There are two methods to solve a chasm trap:

  • Using SQL parameters: In the universe's SQL parameters, enable the option that generates a separate query for each measure. This produces an isolated SQL statement per measure and returns the correct result.
  • Using different contexts: The better approach is to place the two joins in two different contexts, which generates two synchronized queries, and the problem is solved.

8) What is a context, and how is it created?

A context represents a particular join path between tables, or a specific group of joins, for an individual query. By default, an object built on a table column from a specific context is compatible with all the objects from the same context. If objects from different contexts are used together, then, to avoid incorrect results, separate SQL statements are generated and the results from all the statements are combined in a micro cube.

Creating a context: Contexts can be generated manually or with the Detect Contexts feature. In general, contexts are created on the basis of business requirements or logical calculations, so Detect Contexts is not used much. To create one manually, follow these steps:

  • Go to Insert -> Context
  • Provide a name for the context
  • Choose the joins that should be present in the context

Your context is ready. When you create universe contexts, every join must fall in at least one context, but shortcut joins can be excluded from the list.

9) Write some @functions with their functionality?

There are many @functions used in Business Objects. Some of them are listed below:

  1. @Aggregate_Aware: This function is used to define one object for measures held in fact tables of different grains. The syntax is: @Aggregate_Aware(highest_level, ..., lowest_level)
  2. @Select: This function re-uses the SELECT statement of an existing object.
  3. @Where: It re-uses the WHERE clause of an existing object.
  4. @Prompt: This function asks the end user to enter a particular value from a given domain.
  5. @Script: It is used to recover the result of a Visual Basic for Applications (VBA) macro.
  6. @Variable: It references the value assigned to a name or variable.
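As a quick illustration, here is how some of these @functions look inside universe object definitions. This is a hedged sketch: the class and object names (Customer, Time\Year, the aggregate tables) are hypothetical; only the call shapes follow the Business Objects @function syntax.

```
@Aggregate_Aware(agg_year.sales_total, agg_month.sales_total, fact_sales.amount)
@Select(Customer\Customer Name)
@Where(Customer\Active Customer)
@Prompt('Enter a Year:', 'A', 'Time\Year', mono, free)
@Variable('BOUSER')
```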

10) What is a derived table, and what is its utility in Business Objects?

As the name suggests, a derived table is a table created in the universe using an SQL query at the database level. The columns of the derived table are the columns selected in the query.

Uses of derived tables:

  • They are used for complex calculations that are not practical at the report level; such calculations are done at the query level itself.
  • They can be used to access tables from a different schema with the help of a dblink.
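A derived table behaves like a named subquery whose columns are exactly the columns its query selects. A minimal sketch in SQLite (the sales table and region_totals name are illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
CREATE TABLE sales (region TEXT, amount INTEGER);
INSERT INTO sales VALUES ('North', 100), ('North', 150), ('South', 80);
""")

# The derived table (region_totals) pushes the aggregation down to the
# query level; the outer query sees only the columns it selected.
rows = cur.execute("""
    SELECT region, total
    FROM (SELECT region, SUM(amount) AS total
          FROM sales
          GROUP BY region) AS region_totals
    ORDER BY region
""").fetchall()
print(rows)  # [('North', 250), ('South', 80)]
```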

11) Is a derived table different from a view? If yes, then how? Which one is preferable?

Yes, a derived table is different from a view. A derived table is created in the universe, while a view is created at the database level.

Generally, a view is preferred over a derived table, because the calculation burden then remains in the database and does not load the BO server. But where developers have no right to create objects in the database, a derived table is the only solution.

12) Define a fan trap. How can it be solved?

If we have three tables in a universe joined in such a way that the first table has a one-to-many join with the second, and the second table has a one-to-many join with the third, this situation is known as a fan trap. The value of the measure is inflated if a measure from the 2nd table is dragged along with any dimension of the 3rd table.

Solution: To solve a fan trap, create an alias of the middle (2nd) table, so that the normal table is joined with the 1st table while the alias is joined with both the 1st and 3rd tables. We then use the other dimensions of the 2nd table from the alias table, and the 2nd table's measure from the normal table.
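The fan trap can also be reproduced with plain SQL. A minimal sketch in SQLite, with illustrative table names (customer -> orders -> order_line, each join one-to-many):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
CREATE TABLE customer   (cust_id  INTEGER PRIMARY KEY);
CREATE TABLE orders     (order_id INTEGER PRIMARY KEY,
                         cust_id  INTEGER, order_amt INTEGER);
CREATE TABLE order_line (order_id INTEGER, product TEXT);
INSERT INTO customer   VALUES (1);
INSERT INTO orders     VALUES (10, 1, 500);               -- one order of 500
INSERT INTO order_line VALUES (10, 'pen'), (10, 'pad');   -- two lines
""")

# Dragging the middle table's measure with a dimension from the third
# table: the order amount is counted once per order line.
inflated = cur.execute("""
    SELECT SUM(o.order_amt)
    FROM customer c
    JOIN orders     o ON c.cust_id  = o.cust_id
    JOIN order_line l ON o.order_id = l.order_id
""").fetchone()[0]
print(inflated)  # 1000, though the customer only ordered 500

# What the alias fix achieves: the measure comes from a join path that
# stops at the middle table, so the fan-out never multiplies it.
correct = cur.execute("""
    SELECT SUM(o.order_amt)
    FROM customer c
    JOIN orders o ON c.cust_id = o.cust_id
""").fetchone()[0]
print(correct)  # 500
```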

13) Explain index awareness in detail. How is it implemented?

Index awareness is the ability to assign key (index) values to the values of universe objects. With index awareness, the values in the filter conditions of queries built from the universe are replaced by their corresponding index values. Usually, a filter value comes from a dimension table, and a join with the fact table is required to apply it. With index awareness implemented, that join is no longer needed and is eliminated: the query filter uses the corresponding key value from the fact table itself.

Implementation: To implement index awareness, first identify the dimension objects that will be used in query filters. In the Keys tab of the object's Edit Properties dialog, define the primary key from the primary key of the source table from which the object is derived, and define the foreign-key relationships of the database columns with the other tables. Once this is done for all the required dimensions, the universe becomes index aware.
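The join elimination can be shown with two equivalent queries. A minimal sketch in SQLite (city and sales are illustrative names; 'Paris' has the hypothetical key 10):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
CREATE TABLE city  (city_id INTEGER PRIMARY KEY, city_name TEXT);
CREATE TABLE sales (city_id INTEGER, amount INTEGER);
INSERT INTO city  VALUES (10, 'Paris'), (20, 'Lyon');
INSERT INTO sales VALUES (10, 100), (10, 50), (20, 70);
""")

# Without index awareness: filtering on the dimension value needs a join
# to the dimension table.
with_join = cur.execute("""
    SELECT SUM(s.amount)
    FROM sales s JOIN city c ON s.city_id = c.city_id
    WHERE c.city_name = 'Paris'
""").fetchone()[0]

# With index awareness: 'Paris' is replaced by its key value 10, so the
# join is eliminated and the filter hits the fact table directly.
without_join = cur.execute(
    "SELECT SUM(amount) FROM sales WHERE city_id = 10").fetchone()[0]

print(with_join, without_join)  # 150 150
```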

14) What is a linked universe, and what are the types of linked universes?

Linked universes are universes that share common components such as classes, objects, joins, and parameters. One universe plays the role of the core universe, and the other, which is derived from it, is called the derived universe. The types are defined below in detail.

There are mainly two types of linked universes. These are:

  • Core universe: The main universe to which the other universes are linked. It contains the components common to the derived universes and represents a re-usable library of components. Depending on how its components behave, it can be classified as a master or kernel universe.
  • Derived universe: The derived universe contains a link to the core universe, which lets it share the core universe's components. If the linked universe is a kernel universe, then components can be added to the derived universe. Otherwise, if it is a master universe, the derived universe contains all the components of the core universe.

15) Write the approaches used for linking universes?

There are three approaches to linking universes. These are:

Kernel Universe Approach:

  • In this approach, one universe contains the core components, which are then added to the derived universe.
  • The derived universe contains the core components of the kernel universe as well as its own specific components.
  • Any change made in the kernel universe is also reflected in the derived universe.

Master Universe Approach:

  • The master universe contains all the possible components.
  • Some components can be hidden in the derived universe according to their relevance to the target users, so the visible components in the derived universe are always a subset of the master universe.
  • New classes and objects cannot be added to the derived universe.
  • Any change made in the master universe is reflected in the derived universe.

Component Universe Approach:

  • Two or more universes are merged into a single universe.
  • The components of all the merged universes are combined in that single universe.

16) What do you understand by drill modes, and what are the various drill modes available? Explain them in detail.

Drill modes help analyze data from different angles and at different levels of detail. There are mainly four types of drill modes. These are:

  • Drill up: This mode takes the user one level up in the hierarchy.
  • Drill down: It allows the user to analyze the data more deeply by going into lower levels of the hierarchy.
  • Drill by: Using this mode, we can analyze data that belongs to another hierarchy by moving into that hierarchy.
  • Drill through: Instead of presenting all the granular data to the user, it shows only the relevant data being analyzed.

17) What are slice and dice, and how are they different from each other?

Slice: It is used to rename, reset, and delete blocks.

Dice: It is used to display and remove data.

Slice works with master/detail reports, while dice turns tables and crosstabs into charts and vice versa.

18) What are personal, shared, and secured connections? Write some differences between them.

  1. Personal connections: Created by a single user; other users can't use them.
  2. Shared connections: Created by one user but usable by other users through a shared server.
  3. Secured connections: Overcome the limits of the above two connection types; rights can be set on documents as well as objects.

Differences: The differences are summarized below.

Personal connections:

  • Created by only one user; others can't use them.
  • Connection details are stored in the PDAC.LSI file.
  • Rights can't be set on documents and objects.
  • Universes can't be exported to the central repository using this connection.

Shared connections:

  • Created by one user but can be used by other users via a shared server.
  • Connection details are stored in the SDAC.LSI file.
  • Rights can't be set on documents and objects.
  • Universes can't be exported to the central repository.

Secured connections:

  • Overcome the limitations of personal and shared connections by applying rights to documents and objects.
  • Connection details are stored in the CMS.
  • Universes can be exported and shared through the central repository.

19) Write some products and user profiles that are linked with Business Objects?

Some products related to Business Objects, and the user profiles that go with them, are:

Products:

  • Set Analyzer
  • Info View
  • Broadcast Agent
  • User Module
  • Supervisor
  • Designer

User profiles:

  • General Supervisor
  • General Designer
  • Designer
  • End User
  • Versatile User
  • Graphical Interface
