If you are preparing for a Business Objects interview, make sure to go through this question series. Here you will find the latest interview questions and answers on Business Objects frameworks and tools.
Welcome to this collection of Business Objects questions with appropriate answers. The list covers a variety of short topics that are part of Business Objects, such as connections, custom hierarchies, contexts, data sources, and many more. Based on our past years of experience, we picked the questions that are frequently asked in interviews. Spend a few minutes on the full article and you will not miss a single topic.
Business Objects is designed for business professionals, helping them retrieve data from corporate databases directly from the desktop. The documents created from the retrieved data can be analyzed and presented in various formats. Higher-level management can use it as an OLAP tool and as a major part of a decision-support system. In short, Business Objects can be described as an integrated query, reporting, and analysis tool whose purpose is to deliver data-driven answers to business professionals.
It has some characteristics which are described below:
Business Objects have many pros which are shown below:
Designer is the module used by designers to develop and maintain universes; it is part of the Business Objects suite. A universe is the semantic layer that isolates end users from the technical issues associated with the underlying databases. Universe designers can make universes available to end users, either by exporting them to the central repository or by distributing them as files through the file system.
There are mainly two modes associated with them: enterprise mode and workgroup mode.
Business Objects provides access to various data sources, and you also have the option to access data from an RDBMS. Common RDBMS products include Oracle, Microsoft SQL Server, IBM DB2, and Sybase.
Custom hierarchies are defined in the universe to provide a customized drill-down between objects that may belong to the same or different classes, according to user needs. They can be created in BO Designer via Tools -> Hierarchies.
Drill down is a procedure associated with multidimensional analysis inside Business Objects: it lets you move from summary-level data to progressively finer levels of detail, for example from Year to Quarter to Month.
A chasm trap occurs when the values in the fact tables become inflated. This condition arises in a dimensional-schema-based universe where a single dimension table is joined to two fact tables. When we drag a measure from both fact tables along with dimensions from the dimension table, the measures are inflated, because each row of one fact table pairs with every matching row of the other. This situation is known as a chasm trap.
Two methods to solve the chasm trap: create a separate context for each fact table, or enable the "Multiple SQL statements for each measure" option in the universe parameters so that each measure is fetched by its own query.
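The inflation can be demonstrated outside Business Objects with a small, hypothetical SQLite schema (the table and column names below are made up for illustration): one dimension table joined to two fact tables, which is exactly the chasm-trap shape.

```python
import sqlite3

# Hypothetical chasm-trap schema: one dimension (customer) joined
# to two fact tables (orders, payments).
con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
CREATE TABLE customer (cust_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE orders   (cust_id INTEGER, order_amt REAL);
CREATE TABLE payments (cust_id INTEGER, pay_amt   REAL);
INSERT INTO customer VALUES (1, 'Acme');
INSERT INTO orders   VALUES (1, 100), (1, 200);          -- true total: 300
INSERT INTO payments VALUES (1, 50), (1, 60), (1, 70);   -- true total: 180
""")

# One query joining both fact tables: each order row pairs with each
# payment row (2 x 3 = 6 rows), so both measures are inflated.
inflated = cur.execute("""
    SELECT SUM(o.order_amt), SUM(p.pay_amt)
    FROM customer c
    JOIN orders o   ON o.cust_id = c.cust_id
    JOIN payments p ON p.cust_id = c.cust_id
""").fetchone()
print(inflated)  # (900.0, 360.0) -- inflated, not (300, 180)

# The fix mirrors what BO does with contexts or multiple SQL
# statements per measure: query each fact table separately.
orders_total = cur.execute("SELECT SUM(order_amt) FROM orders").fetchone()[0]
pay_total = cur.execute("SELECT SUM(pay_amt) FROM payments").fetchone()[0]
print(orders_total, pay_total)  # 300.0 180.0
```

Splitting the query per fact table is precisely why BO generates separate SQL statements and stitches the results together in the microcube.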
A context represents a particular join path between tables, or a specific group of joins, for an individual query. An object built on a table column from a specific context is, by default, compatible with all objects from the same context. If objects from different contexts are used, separate SQL statements are generated to avoid incorrect results, and the results of all statements are combined in a microcube.
Creating a context: Contexts can be generated manually or with the Detect Contexts feature. In practice, contexts are usually created based on business requirements or logical conditions, so Detect Contexts is of limited use. To create a context manually, follow these steps:
Your context is ready. If you are creating universe contexts, every join must fall into at least one context; shortcut joins can be excluded from the list.
There are many @functions used in Business Objects, including @Select, @Where, @Prompt, @Variable, @Aggregate_Aware, and @Script.
As the name suggests, a derived table is a table created in the universe using an SQL query against the database. The columns of the derived table are the columns selected in that query.
Uses of Derived table:
Yes, a derived table is different from a view: a derived table is created in the universe, while a view is created at the database level.
Generally, a view is preferred over a derived table, because the calculation burden stays in the database and does not load the BO server. But where developers do not have rights to modify the database, a derived table is the only solution.
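In SQL terms, a derived table behaves like a subquery in the FROM clause: the outer query sees only the columns the inner SELECT exposes. A minimal sketch with an invented `sales` table (names are illustrative, not from any real universe):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
CREATE TABLE sales (region TEXT, amount REAL);
INSERT INTO sales VALUES ('EU', 100), ('EU', 150), ('US', 200);
""")

# The subquery in FROM plays the role of a derived table: its SELECT
# list defines the columns ('region', 'total') the outer query sees,
# just as a universe derived table exposes the columns of its query.
rows = cur.execute("""
    SELECT dt.region, dt.total
    FROM (SELECT region, SUM(amount) AS total
          FROM sales GROUP BY region) AS dt
    WHERE dt.total > 150
""").fetchall()
print(rows)
```

The same aggregation could live in a database view; the derived-table form keeps it inside the universe definition instead.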
A fan trap occurs when three tables in a universe are joined such that the first table has a one-to-many join to the second and the second has a one-to-many join to the third. The value of a measure in the second table is inflated if that measure is dragged along with any dimension of the third table.
Solution to the fan trap: create an alias of the middle (second) table so that the normal table remains joined to the first table while the alias is joined to both the first and third tables. Use the dimensions of the second table from the alias, and the second table's measure from the normal table.
Index awareness is the ability to assign indexes (key values) to the values of universe objects. With index awareness, the values in the filter conditions of queries built from the universe are replaced by their corresponding indexes. Normally, a filter value comes from the dimension table, so a join with the fact table is required to apply it. With index awareness, that join is eliminated: the query filter uses the corresponding index value from the fact table itself.
Implementation: to implement index awareness, first identify the dimension objects to be used in query filters. In the Keys tab of each object's Edit Properties dialog, define the primary key from the source table's primary key, and define the foreign-key relationships to the corresponding columns in the other tables. Once this is done for all required dimensions, the universe becomes index-aware.
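The query rewrite that index awareness performs can be sketched with an invented country/sales schema (names are illustrative): the filter on the dimension value is replaced by a filter on its key in the fact table, and the join disappears.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
CREATE TABLE country (country_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE sales   (country_id INTEGER, amount REAL);
INSERT INTO country VALUES (1, 'France'), (2, 'Germany');
INSERT INTO sales VALUES (1, 100), (1, 200), (2, 300);
""")

# Without index awareness: filtering on the dimension value forces a
# join to the dimension table.
joined = cur.execute("""
    SELECT SUM(s.amount)
    FROM sales s JOIN country c ON c.country_id = s.country_id
    WHERE c.name = 'France'
""").fetchone()[0]

# With index awareness: the generated SQL substitutes the key (1) for
# 'France' and filters the fact table directly -- no join needed.
keyed = cur.execute(
    "SELECT SUM(amount) FROM sales WHERE country_id = 1"
).fetchone()[0]
print(joined, keyed)  # same result, one fewer join
```

Both queries return the same total; the second is what an index-aware universe generates, and it is cheaper because the dimension table is never touched.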
Linked universes are universes that share common components such as classes, joins, and parameters. One universe plays the role of the core universe, and the other, derived from it, is called the derived universe. The types are defined below in detail.
There are mainly two types of linked universes: the core (kernel) universe and the derived universe.
There are three approaches to linking universes. These are:
Kernel Universe Approach:
Master Universe Approach:
Component Universe Approach:
Drill modes help to analyze data from different angles and at different levels of detail. There are mainly four drill modes: drill up, drill down, drill by, and drill through.
Slice: used to rename, reset, and delete blocks.
Dice: used to display and remove data.
Slice works with master/detail reports, while dice turns tables and crosstabs into charts and vice versa.
Differences: the differences are shown in the following table.
| Personal Connections | Shared Connections | Secured Connections |
| --- | --- | --- |
| Created by a single user; other users cannot use them. | Created by one user but usable by other users through a shared server. | Overcome the limitations of personal and shared connections by applying rights to documents and objects. |
| Connection details are stored in a PDAC.LSI file. | Connection details are stored in an SDAC.LSI file. | Connection details are stored in the CMS. |
| Cannot set rights on documents and objects. | Also cannot set rights on documents and objects. | Can set rights on documents and objects. |
| Universes cannot be exported to the central repository using this connection. | Universes cannot be exported here either. | Universes can be exported to and shared through the central repository. |
There are many products related to Business Objects, including:
- Set Analyzer
- Info View
- User Module
- General Supervisor
- General Designer
- End User
Informatica is a software development company founded in 1993 in California, USA. Its core products are Enterprise Cloud Data Management and Data Integration. Its product portfolio is concentrated on data integration: extract-transform-load (ETL), information lifecycle management, business-to-business data transfer, cloud computing integration, complex event processing, data masking, data quality, data replication, data virtualization, master data management, and ultra messaging. Together, these elements form a toolset for building and maintaining data warehouses.
MapReduce is a processing methodology and programming framework for distributed computing, commonly implemented in Java. The MapReduce algorithm comprises two important tasks, namely Map and Reduce. The map function takes a set of data and converts it into another set of data, where individual elements are broken down into tuples (key/value pairs); the reduce task then takes the output from a map as its input and combines those data tuples into a smaller set of tuples. As the name MapReduce indicates, the reduce task is always executed after the map job.
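The map/shuffle/reduce flow can be sketched with a toy word count in plain Python (this is only an in-process illustration of the model, not a distributed Hadoop job):

```python
from collections import defaultdict
from functools import reduce

docs = ["the quick brown fox", "the lazy dog", "the fox"]

# Map phase: each document -> a list of (key, value) tuples.
mapped = [(word, 1) for doc in docs for word in doc.split()]

# Shuffle phase: group the emitted values by key.
groups = defaultdict(list)
for key, value in mapped:
    groups[key].append(value)

# Reduce phase: fold each group's values into one result per key.
counts = {key: reduce(lambda a, b: a + b, values)
          for key, values in groups.items()}
print(counts["the"], counts["fox"])  # 3 2
```

In a real Hadoop cluster, the map and reduce phases run on different machines and the shuffle moves data across the network, but the logical contract is exactly this.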
Apache Hive is data warehouse software built on top of Apache Hadoop for data query and analysis. It provides an SQL-like interface to query data stored in databases and file systems that integrate with Hadoop. Without it, SQL-style applications and queries over distributed data would have to be implemented against the low-level MapReduce Java API; Hive provides the necessary SQL abstraction, translating SQL-like queries into the underlying Java without requiring queries to be written against that low-level API.
Sqoop is a command-line interface tool for transferring data between Hadoop and relational databases. It supports loads of a single table as well as saved jobs that can be run multiple times to import updates made to a database since the last import. Imports can also be used to populate tables in Hive or HBase, and exports can be used to push data from Hadoop into a relational database. Sqoop got its name from SQL-to-Hadoop and became a top-level Apache project.
Data warehousing is the process of constructing and managing a data warehouse. A warehouse is built by integrating data from heterogeneous sources to support analytical reporting, structured or ad hoc queries, and decision making.
It also involves data cleaning, data integration, and data consolidation. Decision-support technologies employ the data available in a data warehouse; they help administrators use the warehouse effectively, gather and analyze data, and draw conclusions from it. The information gathered in a warehouse can be used in domains such as tuning production strategies, customer analysis, and operations analysis.
QlikView is a leading business discovery platform and is unique in several ways compared to traditional BI platforms. As a data analysis tool, it maintains the relationships between data, which can be explored visually; it also highlights data that are unrelated. It supports both direct and indirect exploration through searches in the list boxes.
QlikView's core, patented technology features in-memory data processing, which gives users very fast results, computes aggregations on the fly, and compresses data to roughly 10% of its original size. Neither clients nor developers of QlikView applications manage the relationships between data; the relationships are handled automatically.
HBase is an open-source distributed database written in Java, modeled after Google's Bigtable. It is part of the Apache Software Foundation's Apache Hadoop project and runs on top of HDFS (the Hadoop Distributed File System), providing Bigtable-like capabilities for Hadoop. HBase offers a fault-tolerant way of storing large volumes of sparse data (small amounts of information within a large collection of empty or irrelevant data, such as finding the 50 largest items among 2 million records). Moreover, HBase features compression and Bloom filters on a per-column basis, as described in the original Bigtable paper.
QuickBooks, by Intuit, is an accounting application geared toward small and medium businesses; it offers accounting features and cloud-based versions that accept business payments and handle payroll functions. Intuit offers a cloud service called QuickBooks Online (QBO), where the user pays a monthly subscription fee and the software is patched and upgraded automatically, though it also includes pop-up ads within the application.
A data analyst is a person who collects, processes, and performs statistical analyses on large datasets, discovering how data can be used to answer questions and solve problems. With the advancement of computers and an ever-increasing move toward technological intertwinement, data analysis has evolved. The development of the RDBMS gave data analysts new momentum, allowing them to use SQL (pronounced "sequel" or "S-Q-L") to retrieve data from databases.
The Apache Hadoop project develops open-source software for distributed computing. The Hadoop software library is a framework that allows the distributed processing of large data sets across clusters of computers using simple programming models. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage. Rather than relying on hardware to deliver high availability, the library is designed to detect and handle failures at the application layer, providing a highly available service on top of a cluster of computers, each of which may be prone to failure.
IBM's Cognos BI is a web-based analytics tool that helps with data aggregation and the creation of user-friendly, detailed reports. Cognos also offers the option to export and view reports in XML format. Its main features include in-memory streaming analytics, real-time event alerts, an appealing Web 2.0 interface, progressive interaction, search-assisted authoring, wizard-driven external data access, automatic access to SAP BW queries, drill-through capability, image documentation integration, and secure data.
Tableau is a robust data visualization tool used in the Business Intelligence industry that helps turn raw data into an understandable format. Data analysis is efficient with Tableau: the visualizations it generates take the form of dashboards and worksheets, content created in Tableau can be followed by professionals at any level of an organization, and even a non-technical user can build a customized dashboard.
Tally is an India-based multinational company, headquartered in Karnataka, India, that produces enterprise resource planning software. Tally's principal product is its ERP and accounting software, Tally.ERP 9. For large organizations with multiple branches, Tally.Server 9 is recommended. The software handles accounting, inventory management, tax management, payroll, and much more.
Teradata is an enterprise software company based in California, US, that develops and sells database analytics software subscriptions. The company provides three principal services: business analytics, cloud products, and consulting, and it operates in North and Latin America, Europe, the Middle East, Africa, and Asia. Its service uses parallel processing across both its physical and cloud warehouses, which include managed environments such as AWS, Microsoft Azure, VMware, Teradata's Managed Cloud, and IntelliFlex.