Thursday 7 April 2011

OBIEE Presentation Layer Design Principles/Best Practices

www.obieefans.com

Principle 1. Subject Areas
Do not include all presentation layer objects in a single subject area (presentation catalog). While it is possible, it is not a recommended approach.
Principle 2. Development with End Users in Mind
Always develop the presentation layer so that end users are able to understand and use it.
Principle 3. Role-Based Subject Areas
It is always a good idea to create presentation catalogs to suit the needs of individual users or user types. For example, create a separate presentation catalog for sales managers because they need to see only an overview of the organization.
Principle 4. Presentation Tables

Presentation tables should be consistent across presentation catalogs.
  • List Dimension tables first.
  • Do not mix dimension and fact columns in the same table.
  • Apply consistent ordering and naming conventions for tables and columns across catalogs.
  • Include the words “measure” or “fact” in the names of the fact presentation tables.
Principle 5. Rule of Seven
Keep presentation catalogs small and easy to understand by limiting the number of tables to seven.
Likewise, limit the number of columns in each table to seven.
Principle 6. Fact Tables
Try to limit the presentation catalogs so that there are no more than a couple of fact presentation tables in each catalog. It is important to avoid situations in which multiple join paths exist within one presentation catalog.
Principle 7. Implicit Fact Columns
If you have multiple fact presentation tables, it is always best practice to assign an implicit fact column. Implicit facts come into play when an Answers report contains columns from more than one logical dimension table and no explicit facts.
Principle 8. Canonical Time
Canonical time is a useful way of allowing users to report on a specific period of time across multiple star schemas. If you use canonical time, make sure that the corresponding time presentation table is given a very generic name and that the name is consistent across all presentation catalogs.
It should be the first table in a catalog.
Principle 9. Secondary Time Dimension
This enables users to build time reports for a specific star schema.
Secondary time dimensions can be given their own presentation tables further down the list. We can place all secondary time dimension objects into a single presentation table.
Principle 10. Nesting Presentation Tables
Prefix presentation table names with a hyphen to group common objects together into sub folders.
Principle 11. Presentation Object Names & Description
  • Use the Aliases tab to keep track of prior names.
  • Use the default option that synchronizes the presentation column name with the underlying logical column name.
  • Use only logical, business-oriented names (rather than physical object names) in the presentation layer.

OBIEE BMM Layer Design Principles/Best Practices



There are certain design principles, or best practices, that Oracle suggests for designing the OBIEE repository. If you open any standard or sample repository provided by Oracle, you will see exactly what I am talking about.
Today we will discuss the design principles for the BMM layer.
Principle 1. Use Multi User Development Environment
Use the Multi-User Development facility if there are multiple developers. Having multiple developers connect "online" to the same repository file and make changes is not recommended.
Multi-User Development allows you to define a series of projects within the repository file, where each project is a subset of the entire repository. If developers want to make changes, they can check out a project to a local machine, make and test the changes, and then check the modifications back into the master repository file.
Principle 2.  Always run Global Consistency Check before releasing a repository.
Whenever we make changes to a repository, always be sure to run a global consistency check. It is bad practice to release a repository that still contains consistency check errors; in some cases, consistency errors prevent the Oracle BI Server from loading the repository. Use the Consistency Check Manager to identify and debug consistency check messages.
Principle 3. Separate Business Models
Even if you have only a single data source or schema in the physical layer, it is still good practice to break the physical objects out into multiple business models in the BMM layer, each representing an independent area of functionality.
Principle 4. Logical Tables
When building logical tables, do not merge multiple dimension tables into a single logical dimension table, and do not merge multiple fact tables into a single logical fact table.
Having multiple logical fact tables also makes it easier to create well-defined projects for Multi-User Development.
It is also good practice to prefix logical table names with "Dim-", "Fact-", or "Fact Compound-".
This lets you easily see how the tables are being used, and it groups the tables in the business model so that facts are grouped with facts, dimensions with dimensions, and so on.
Principle 5. Time Dimension
There are a few things to keep in mind regarding time dimensions.
  • Always ensure that the time dimension hierarchy is built correctly and that the logical level of each time logical table source is set correctly.
  • If there are multiple time dimensions within the business model, make sure, for consistency, that all time dimension logical tables contain the same columns and general structure. This is good for reporting purposes.
Principle 6. Logical Fact & Dimension table columns
  • Always assign a primary key for logical dimension tables. All logical dimension columns should be renamed in a way that is meaningful to users.
  • Bring only the columns required for reporting into the BMM layer.
  • Do not assign a logical primary key for logical fact tables.
  • Create meaningful names for measures.
  • Set an aggregation rule for every logical fact column.
  • Create “dummy” measures to group facts.
Principle 7. Logical Joins
Use only logical (complex) joins in the BMM layer, and always accept the default properties when creating joins.

Principle 8. Calculated Measure
Be very careful when building calculated measures.
  • Use logical columns for calculations that require an aggregation rule that is applied before the calculation.
  • Use physical columns for calculations that require an aggregation rule to be applied after the calculation.
Principle 9. Aggregates
A few important things to keep in mind about aggregates:
  • Try to ensure that each aggregate table has an effective summary ratio with the underlying detail.
  • Ensure that the logical level of every aggregate logical table source is set correctly.
  • Always test to ensure that aggregate tables are being used as expected.
  • If an aggregate is not used, try changing the number of elements on one of the related logical dimension levels.
Principle 10. Dimension Hierarchies
For dimension hierarchies, a few things are important to keep in mind:
  • It is best practice to create a dimension hierarchy for every logical dimension table in the BMM layer.
  • All dimensions must have at least two levels: the total level and the detail level.
  • If you are creating a dimension hierarchy manually, be sure to check "Grand total level" for the total level.
  • Use Update Row Counts or Estimate Levels to set the number of elements for every level of every Dimension Hierarchy.
  • Think about the experience of user when enabling drill down.
Principle 11. Avoid Snowflake schema
Even when there is snowflaking in the physical model, try to avoid snowflaking in the BMM layer and build models that use only star schemas.

Use WHERE clause filters to help avoid using opaque views or complex joins in the physical layer.

OBIEE Physical Layer Design Principles/Best Practices

There are certain design principles, or best practices, that Oracle suggests for designing the OBIEE repository. If you open any standard or sample repository provided by Oracle, you will see exactly what I am talking about.
Today we will discuss the design principles for the physical layer.
Principle 1: Import only needed tables into the physical layer, and do not create tables manually in the physical layer.
Import Tables
Reasons to do this: it is easier to import a table than to create it manually, and importing avoids the possibility of mismatched table names, column names, and data types. Import only those tables that you need; if you need more tables at a later stage in the project, you can always import them.
You will use a connection pool to import the tables, which also helps confirm that connectivity to the source database system is working.
Principle 2: Create aliases for the physical tables
Physical Layer Alias Table
Reasons to do this:
  • It helps avoid circular joins.
  • It helps when we have a conformed dimension (e.g. a time dimension) and we want to join the dimension in different ways.
  • It is easier to give users access to objects in a Multi-User Development environment.
  • An alias naming convention makes it easy to find fact and dimension tables in the physical layer.
  • We can import key relationships into the physical layer from the database, and maintain only the relationships required among the alias tables.
Principle 3: Set the cache property of the physical tables
Cache Property of the tables
Reasons to do this:
  • It is better to set caching at the physical table level, not at the alias table level; this applies to most scenarios.
  • If your data source is an OLTP system, make sure you disable caching, as the change frequency will be high and you do not want to cache results.
  • For data warehouse sources, set a suitably long cache persistence time.
Principle 4: Connection pool(s) and their properties
Physical Layer Multiple connection Pool
Reasons to do this:
It is better to have separate connection pools for different purposes, e.g. usage tracking or security implementation.
Connection Pool Property
Call interface: use the native call interface for the connection (e.g. OCI 10g), based on the source database. It is faster than a generic ODBC connection.
Set the connection pool time-out, maximum connections, etc. based on server configuration and usage.
Connection pooling avoids database connection overhead: if users run the same query, it reuses a connection from the pool, which means user sessions share connections.
Maximum Connections:
Upper bound: approximately 1024 KB of server memory is consumed by each connection, so this has to be set according to server configuration and requirements.
Lower bound: 25% of the maximum number of users, times the number of reports they execute.
e.g. if you have 1000 users each accessing around 10 dashboard reports, this number should be (25% of 1000) * 10 = 250 * 10 = 2500.
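As a quick sanity check, the sizing heuristic above can be written out as a small calculation (a sketch only; the 25% concurrency ratio and the roughly 1024 KB per connection are the rules of thumb from the text, not official Oracle figures, and the function name is mine):

```python
def min_max_connections(total_users, reports_per_user, concurrency_ratio=0.25):
    """Lower bound for the connection pool's Maximum Connections setting:
    (concurrency_ratio * total_users) * reports each concurrent user runs."""
    concurrent_users = int(total_users * concurrency_ratio)
    return concurrent_users * reports_per_user

# 1000 users, ~10 dashboard reports each: (25% of 1000) * 10 = 2500
print(min_max_connections(1000, 10))
```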

“Account is locked by OCI call ... Could not connect to Oracle database”


Q: When I try to run a query in Answers I get “the account is locked at OCI call OCILogon”?

This can happen for various reasons – for example, when trying to rename a repository that is currently online.  You may get the error message:

*  ORA-28000: the account is locked at OCI call OCILogon. [nQSError: 17014] Could not connect to Oracle database. (HY000)

or even just the “Could not connect” component of it.  What’s surprising is that the account that is locked has nothing to do with OBIEE.  It’s the Oracle database schema used by the “Connection Pool” within the repository – account “sh” in the case of the tutorials.

Logon to SQL*Plus as “sys” or “system” and issue the command:

*  alter user <user name> account unlock;
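If you are not sure which schema the connection pool uses, you can first list the locked accounts (a hedged example; run as "sys" or "system", and note that "sh" is just the tutorial schema):

```sql
-- Show which accounts are currently locked, then unlock the right one.
SELECT username, account_status FROM dba_users WHERE account_status LIKE '%LOCKED%';
ALTER USER sh ACCOUNT UNLOCK;
```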

Unfortunately, this error seems to tie the entire OBIEE configuration up in “knots”, so you may well have to reboot the system to get “Answers” up and working again.

OBIEE not working


When you boot the system sometimes the “Automatic” processes start, and sometimes they don’t.  Stop a process, change a parameter, and sometimes you can’t get it to restart.

Whenever you’re stuck make sure you do the following.  First turn any firewalls or anti-virus checkers off: it's possible for OBIEE to work perfectly well with these until an error occurs; but in order to clear the error you may have to turn them off.  Then set all the BI processes to “Manual”, perform a hard-reboot of the system, and then start the processes up in the following order:

*  Oracle BI Server
*  Oracle BI Presentation Services
*  Oracle BI Java Host
*  Oracle BI Scheduler
*  Oracle BI Cluster Controller (only in a cluster configuration)

If this doesn’t work then repeat the hard-reboot several times – it's quite common to find, say, that the first two reboots don't work, but the system comes up on the third attempt!

If rebooting doesn’t work then restore all the configuration files to their installation values after saving the current configuration files – you were, of course, wise enough to keep copies of the original configuration files, now weren’t you?  If OBIEE works with the original configuration files, then there’s something about the new configuration files that is causing the problem.  Re-implement them in stages until you find the source of the problem.

But if OBIEE doesn’t work with the original configuration files, then the problem is due to some change to the Windows, OBIEE, or other application software installed on the machine.  Restore all software and configuration files that were present when you first installed OBIEE (and it worked) – from the image backup you were prescient enough to make at the time.  Then incrementally add the changes to the software that you’ve made since the OBIEE installation.  When you’ve found the software change that’s causing the problem, then restore the system to its current configuration – using the image backup you’ve just made of the current configuration, of course.  Then fix the software problem in the current version.


“SQL Statement preparation failed”



Q: When I try to run the usage tracking request in Presentation Services I get an error, “SQL Statement preparation failed”?

Another little Oracle oopsie.

This bug takes a little bit of detective work.  Now you may remember that towards the beginning of the tutorial you were asked to change the name of “LOGIN” to “USER_NAME” in script “SQLServer_create_nQ_UserGroup.sql” since “LOGIN” is an Oracle reserved word (despite the fact that you’ve just run several Oracle specific setup scripts, you now have to modify a script intended for SQLServer, which speaks volumes about the care Oracle has taken in preparing and testing the contents of its OBIEE distribution).  In any case, with this change in place the view created by the script, “NQ_LOGIN_GROUP”, has two columns “USER_NAME” and “RESP”.

If you open the repository “sh_usagetracking.rpd” with the Administration Tool and, in the Physical layer, navigate to “OBI Usage Tracking => Catalog => dbo” you’ll find that “NQ_LOGIN_GROUP” also has two columns.  One is “RESP”, but the other is ... can you guess ... “LOGIN” – not “USER_NAME”.  So the BI Server is asking the database for the value of column “LOGIN”, but the database is saying in response, “Sorry, Guv, we don’t hav’ one of them”, or, in other words, “SQL Statement preparation failed”.

So while the tutorial has asked you to update the column in the view creation script, it has supplied you with a repository in which the corresponding change has not been made, and the instructions for editing the repository in the tutorial fail to mention that this change needs to be made.  So just rename column “LOGIN” to “USER_NAME”, save the repository, and restart the BI Server.  Then, fingers crossed, all will be well.

“Unable to connect to port 9705 on machine localhost”

 

Q: When I click OK in the Machine Name window I get an “Unable to connect to port 9705 on machine localhost”?

The instructions early on in the tutorial ask you to start the BI Presentation Services and the BI Server, but they forget to tell you to start the BI Scheduler – that’s what the Job Manager was expecting to find on port 9705, but didn’t.

“SH.zip” for “MUDE” tutorial has gone “AWOL”

 

Q: I can’t download file SH.zip for the MUDE (Multi User Development Environment) tutorial.  I’m getting “file not found”?

Well, yes, it seems that this particular “SH.zip” has gone “AWOL” at present.  Fortunately, the “SH.zip” file that you need for this tutorial is the same as that used for tutorial one, “Creating Interactive Dashboards and Using Oracle Business Intelligence Answers”, and for tutorial three, “Installing the Oracle Business Intelligence Suite Enterprise Edition on Windows”, even though it comes ... or would come if the link wasn’t broken ... from a different location.

Could not connect due to missing username/password



Q: When I use the repository from the second half of the repository tutorial and I try to update the row counts it says it can’t connect due to an invalid username/password?

The repository supplied with this tutorial assumes that the password for account “sh” is “sh”.  But it’s quite likely that your DBA has changed the password from the default when he unlocked the account during the installation of the database.  If this is the case change the password for the connection to match the one you currently use.  Expand the “SH” node in the “Physical Layer”, select “Connection Pool”, right-click “Properties”, and set the “Password” field to the appropriate value.

Global Consistency Check: no warnings for missing keys



Q: When I do the global consistency check in the repository tutorial I don’t get the warnings about no keys being defined as in the tutorial.  If I look in the physical layer then the tables don’t have keys so why are the warnings missing?

They’re missing because by default the global consistency check doesn’t test for “Best Practice” messages – something the tutorial forgot to tell you.  To enable these messages, from the “Consistency Check Manager” (where the warnings and messages are displayed), select the “Options” tab, click on the “Best Practice” node, and then press the “Enable” button at the bottom of the screen.  Click on the “Messages” tab, and then on the “Check All Objects” button.  Now the two “Best Practice” messages about the missing keys should appear.

Global Consistency Check: Database features do not match the defaults



Q: When I do the global consistency check in the repository tutorial I get a warning saying that the database features do not match the defaults.  What does this mean?

Even if you have followed the tutorial correctly, you will get the warning message:

*  [39028] The features in Database ‘SH’ do not match the defaults. This can cause query problems.

when you do a global consistency check.  This warning is not shown or discussed in the tutorial.

The BI Server has to be able to generate optimized SQL for a wide variety of data sources.  To do this it needs to know what features each data source, typically a database, possesses; for example, some data sources will support an “outer join” and some will not.  This information is called the set of “default database features” (it’s stored in configuration file “DBFeatures.ini” in directory “<oraclebi home>\server\config”).

If you right-click on “SH” in the “Physical Layer”, select “Properties”, and then the “Features” tab you’ll see the list of default checkboxes, ticked or unticked, under the “Default” column heading on the right hand side.  Next to it will be another similar column with a heading of “Value”.  It’s possible to override the defaults by checking or unchecking the value checkboxes.  If the two lists don’t match then you’ll get the above error message when you do a global consistency check.

If the current values are more restrictive than the defaults then queries will work, but they may not be as performance efficient as they might be.  If the current values are less restrictive than the defaults then the query sent to the database might be unsupported, leading to an error or to an invalid result set.

So unless there is a very good reason, the current values should match the defaults (one reason for changing them might be that minor releases of the same database version might support different features or have bugs in specific features).  The “Revert to defaults” button at the bottom of the screen can be used to reset the current values to the defaults, and eliminate the warning message.

Now to explain why you are getting the error message, and why there is an issue of far greater importance that is missing from the tutorial.  Edit file “DBFeatures.ini” using “Notepad”.  Search for “ORACLE_80”.  Note in the section beginning “IS_LEFT_OUTER_JOIN_SUPPORTED” that the left outer join is the only type of outer join supported.  Now search for “ORACLE_10g_R2” and you’ll see that all four types of outer join are supported.
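The entries you will see look roughly like the following (a from-memory sketch, not a verbatim copy; check your own "DBFeatures.ini" for the exact syntax, the exact feature names beyond IS_LEFT_OUTER_JOIN_SUPPORTED, and the full feature list):

```ini
[ ORACLE_80 ]
IS_LEFT_OUTER_JOIN_SUPPORTED = YES ;
IS_RIGHT_OUTER_JOIN_SUPPORTED = NO ;
IS_FULL_OUTER_JOIN_SUPPORTED = NO ;

[ ORACLE_10g_R2 ]
IS_LEFT_OUTER_JOIN_SUPPORTED = YES ;
IS_RIGHT_OUTER_JOIN_SUPPORTED = YES ;
IS_FULL_OUTER_JOIN_SUPPORTED = YES ;
```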

Right click on the “SH” node in the “Physical Layer” and select “Properties”.  Select the “General” tab.  You’ll see the value of “Database” listed as “Oracle 8i”.  Now click on the “Features” tab.  You’ll see that the join attributes, both current and default, correspond to an “Oracle 8i” Database.  If you scroll down the list of features you’ll see a mismatch in places, such as in “ODBC API Conformance”.  The import uses the ODBC data source and uses some of the information it provides to update some of the current values, leading to a mismatch with the defaults – which explains why you are getting the warning message.  Press “Revert to defaults”, press “OK”, and then retry the global consistency check.  You’ll find that the warning message has disappeared.

If you’ve followed the above you should have a question, and it’s a very important one with regard to performance.  If you were to use this repository as a production repository – as can happen – then you would be restricting the SQL sent to your Oracle 10g/11g database to Oracle 8i SQL constructs, which would lead to a significant degradation in performance (note that setting the “Call interface” field in the “Connection Pool” to “OCI 10g/11g” as you did at the start of the tutorial has no effect on the database features held in the repository).

Select the “General” tab from the “SH” properties as before and select the “Oracle 10g R2/11g” value from the “Database” LOV.  Click on the “Features” tab.  You’ll find that both the current values and the defaults have been set to the “ORACLE_10g_R2” values that you found in the configuration file (all four join types are supported).  Press “OK”.  Try the global consistency check as before and the warning will not appear, but this time you’ll be using all the features that your database can support.


Can’t log back into repository tutorial



Q: I took a break after the first section of the repository tutorial.  I saved it and now I’m trying to log in again.  I presume the repository must be opened offline since the server is down.  But when I try to login as Administrator/ Administrator the login fails?

Yes, the issue of passwords in the tutorials can be misleading.  Elsewhere, such as when logging into the Presentation Services, you use “Administrator” as the password, but here you don’t.  Just leave the password field blank and you should be able to log back in and continue with the tutorial.


cannot start BI server -- control request exception



Q: I’ve installed OBIEE and got it working okay.  Then I started on the repository tutorial and now I can’t bring the server up.  I get a control request exception?

The error message you’re getting is probably:

*  Error 1064: An exception occurred in the service when handling a control request

which is not very informative.  You’ll get this message for a variety of reasons.  I think in your case it’s likely to be because of an incompletely defined repository.  When you attempt to bring up the BI Server it will first read file “NQSConfig.ini” from directory “<oraclebi home>\server\config”.  This configuration file will contain the name of the default repository.  The BI Server will then read the contents of that repository.  But if the repository is corrupted or incomplete then the BI Server will refuse to start and you’ll get the above error message.

As part of the installation tutorial a repository “sh.rpd” is provided to test that everything is working correctly.  This is a completely defined repository, which is why you could bring the BI Server up in the past.  But in the repository tutorial you create file “sh.rpd” from scratch.  Unless you’ve finished the tutorial, you’ll have a partially defined repository, so the BI Server will refuse to start.

Edit file “NQSConfig.ini” and change the repository name to “paint.rpd” or “samplesales.rpd” (first check that the repository you select is present in “<oraclebi home>\server\repository”).  Then you should be able to bring the BI Server back up.
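For reference, the repository section of "NQSConfig.ini" looks something like this (a sketch from memory of the 10g format; the logical repository name in your file may not be "Star"):

```ini
[ REPOSITORY ]
Star = samplesales.rpd, DEFAULT;
```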

To confirm that an incompletely defined repository is the cause of the problem look in the BI Server log file, “NQServer.log”, in directory “<oraclebi home>\server\log”.  You’ll probably see something like:

*  [nQSError: 15014] No subject area is available in the repository Star.

The error messages that appear in the log files are sometimes more informative than those that appear on the screen.

Just use the Administration Tool to complete the repository tutorial.  Then switch the repository name in “NQSConfig.ini” back to “sh.rpd”.

No row count in Physical Layer after import



Q: After importing schema metadata and doing an update of all rows the tutorial shows the row counts displayed in the Physical Layer, but I just see the table names?

If you run your mouse over the table names you’ll see the row count appear briefly.  What the tutorial forgot to tell you is that row counts are not displayed by default.  Navigate to “Tools => Options”, select the “General” tab, and check the “Show row count in physical view” check-box.  Press “OK” to exit.  Now the row counts will appear and match the screen values displayed in the tutorial.

“Connection Pool” window doesn’t appear during import



Q: I’m doing the “create repository” tutorial and when I press the “Import” button the “Connection Pool” window doesn’t appear.  Have I done something wrong?

No you haven’t.  There are two different methods of importing schema metadata.  What the tutorial describes is a “hotch-potch” of both, so it’s no wonder you’re confused.

Method 1.  Let’s assume you’ve followed the tutorial instructions to create an “ODBC Data Source” called “SH” that contains connect information for the “SH” schema using a “tnsnames.ora” entry of “ORCL”.  Continuing to follow the tutorial, when you select “Import => from Database” you see “ODBC 3.5” as the default “Connection Type”.  You then select “SH” as the DSN.  What you are doing here is identifying the “ODBC Data Source” that you defined earlier.

When you press “OK” to bring up the “Import” window and select your tables you already have all the information needed to perform the import.  So when you press the “Import” button the import starts and completes without any “Connection Pool” window appearing.  You’ll see the name you gave to the “ODBC Data Source”, “SH”, appear as a node in the “Physical Layer”.  If you expand this node you’ll see that the tables have already been imported (the tutorial is incorrect in saying you have to do something else for the import to proceed).  Just press the “Close” button in the “Import” window and you’re done.

Now right-click on the “Connection Pool” under the “SH” node and select “Properties”.  The “Call Interface” will be set to “ODBC 3.5” (the tutorial is wrong here) and the “Data Source Name” will be set to “SH” or whatever name you used when defining the “ODBC Data Source”.  You don’t actually need to change these values: they worked successfully in allowing you to import the metadata, and they therefore provide an indirect means of connecting to the Oracle database – via the “ODBC Data Source”.  However, changing the “Call Interface” to “OCI 10g/11g” and the “Data Source Name” to the appropriate “tnsnames.ora” entry for your database, say “ORCL”, makes sense as it will effectively bypass the “ODBC Data Source” when you connect to the Oracle database in the future.

Method 2.  But, defining an “ODBC Data Source” is redundant.  The statement in the tutorial “An ODBC data source is needed to import schema information into an Oracle BI repository” is incorrect.  All you have to do to import data is to navigate to “Import => from Database” as before.  Then instead of accepting the default “Connection Type” of “ODBC 3.5”, select a “Connection Type” of “OCI 10g/11g” from the LOV.  Then enter a “TNS Name” of “ORCL”, a “User Name” of “SH”, and the appropriate password.  Select your tables from the “Import” pop-up that appears and press “Import”.  In this case the “Connection Pool” window will appear (this is where the tutorial is getting confused).  Press “OK” and the import will start.  Press “Close”.  The only difference is that the node in the “Physical Layer” will be named “ORCL” instead of “SH”. 

http://localhost/analytics "this page cannot be displayed"



Q: I’m following the OBIEE tutorial on creating interactive dashboards.  When I enter the address http://localhost/analytics to login I get a “cannot display web page” error?

The web address given in the tutorial is incorrect – it’s missing the port number.  If you try entering

*  http://localhost:9704/analytics

instead, then the address should change to

*  http://localhost:9704/analytics/saw.dll?Dashboard

and the logon screen should appear (the “saw” in “saw.dll” stands for “Siebel Analytics Web”, and you’ll find frequent references to this TLA throughout OBIEE, as in the “SAW protocol” used by the “J2EE Server” and the “Oracle BI Scheduler” to connect to the “Oracle BI Presentation Services”). 

You’ll find that “http://localhost:9704” or, more generally, “http://<machine name>:9704” is used as the web prefix to access the various components of OBIEE from your web browser.  If you enter the address “http://localhost:9704” into your browser you’ll find it brings up the “Oracle Containers for J2EE” (OC4J) home page.  This J2EE application server acts as a bridge between the “HTTP Web Server” and the “Oracle BI Presentation Services” process:

*  You can check that the OC4J installation is working by running the tests under “Quick Check”.

*  The “Application Server Control” link in the top right-hand side brings up the same page as “Systems Management” from the “Oracle Business Intelligence” menu.

*  You’ll find a link to the OC4J documentation at the bottom right.

“Could not connect” during update of row count for Products


 


Q: I’m trying to update the row count for Products in the OBIEE installation tutorial.  I get a “Could not connect message”, even though I can connect using SQL*Plus and the ODBC data source?

When you enter the Connection Pool window you’ll find that the values imported from the “sh” repository don’t match the values in the tutorial.  You’ll get values of

*  Call Interface: “OCI 8i/9i”
*  Data Source Name: “ORCL”

instead of

*  Call Interface: “OCI 10g”
*  Data Source Name: “orclSH”

The value for the call interface won’t cause the connection to fail, but it’s worth selecting a value appropriate to your database from the list of values.  If you have an entry with a Net Service Name of “orcl” in your “tnsnames.ora” file then you don’t have to change the value of the Data Source Name.  If you do use “orclSH” then make sure there is a corresponding entry with this Net Service Name in the “tnsnames.ora” file.

Now to your question.  Given that you can connect from SQL*Plus and an ODBC client the problem is likely to be the password.  The default password for schema “sh” is “sh”, and this is the password that is imported from the repository.  When your DBA installed the database it’s quite likely he changed the passwords when he unlocked the schemas.  If schema “sh” has a different password you’ll be able to connect from SQL*Plus using the changed password.  You’ll also be able to connect from an ODBC client since the ODBC data source definition doesn’t store the password – unlike the Administration Tool – but asks you for it each time you test a connection.  If you’re assuming that the Administration Tool has picked up the changed password from somewhere else then that could be the problem.  Just change the password in the Connection Pool to the one that worked from SQL*Plus and you should be able to connect and update the row count.
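To confirm the diagnosis from SQL*Plus, something like the following might help (the placeholder password is yours to substitute, and the ALTER USER step is only an option if you have DBA access and want to restore the tutorial default):

```sql
-- Verify which password actually works for schema "sh":
CONNECT sh/<changed_password>@orcl

-- Optionally, as SYS or SYSTEM, reset it back to the tutorial default:
ALTER USER sh IDENTIFIED BY sh ACCOUNT UNLOCK;
```

Whichever password succeeds in SQL*Plus is the one to enter in the Connection Pool window.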

“TNS: could not resolve” error during ODBC configuration and administration



Q: I’m trying to get the installation tutorial to work.  I’ve configured the ODBC data source using the tnsnames.ora example given in the tutorial, substituting my own host name, but I’m getting a “TNS: could not resolve” error when I test the connection from the ODBC driver configuration.  And in the Admin Tool when I try to update the row count for Products I also get the same message?

For a start you don’t need to create an ODBC Data Source to carry out this tutorial.  The only reason for doing so might be to test that the Net Service Name in file “tnsnames.ora” has been set up correctly, but you can do this with less effort, if required, from SQL*Plus.

So let’s start with what you will need for this tutorial.  You will need an entry in file “tnsnames.ora”.  You’ll find the file in directory “<ORACLE_HOME>\network\admin”.  If the file doesn’t exist in this directory you’ll need to create one using, for example, Notepad.  You’ll also need an entry in the file similar to the following (substituting for your “host” and “service_name” as appropriate – the “service_name” will probably be “orcl”):

bitest =
   (DESCRIPTION =
      (ADDRESS_LIST =
         (ADDRESS = (PROTOCOL = tcp)(HOST = hp)(PORT = 1521))
      )
      (CONNECT_DATA =
         (SERVICE_NAME = orcl)
      )
   )

This entry maps a Net Service Name of “bitest” onto a Database Service Name of “orcl” at the specified address.  You can verify that this connection is working by logging into SQL*Plus and entering

connect sh/<password>@bitest

The example for the Net Service Name given in the tutorial, “orcl SH”, won’t work because it contains an embedded space.  If you enter this value for the Data Source Name in the Connection Pool window of the Administration Tool then you’ll get a

*  ORA-12154: TNS: could not resolve the connect identifier specified

error message.  If you look at the value entered in the tutorial for the Connection Pool Data Source Name it’s “orclSH”, rather than “orcl SH” – so this looks like a case of the “Oracle fat-finger” syndrome.  So if you change the Net Service Name in the “tnsnames.ora” file from “orcl SH” to “orclSH”, then the main part of the tutorial should work (but the Net Service Name doesn’t have to be “orclSH”; any valid name will work as long as it’s the same name in both places).

Now to creating the ODBC Data Source.  The tutorial states that the Data Source Name must be the same as the instance name.  It doesn’t have to be; you can choose any name you like (note the Data Source Name specified in the ODBC Driver Configuration window has nothing to do with the Data Source Name entered in the Connection Pool window in the Administration Tool – the former is just an arbitrary name for the data source; the latter is the value of a Net Service Name from the “tnsnames.ora” file).

The TNS Service Name entered in the ODBC Driver Configuration window must be the same as the Net Service Name you want to test, for example “orclSH”.  Now the value of the TNS Service Name specified in the tutorial, “orcl”, is a Database Service Name.  Using a Database Service Name won’t work, which is probably why you also got an error when testing the connection from the ODBC Driver Configuration window (the tutorial states that the TNS Service Name can be equal to the Service Name, but this is not correct, in general).  However, it’s very common for a “tnsnames.ora” file to contain an entry that maps a Net Service Name of “orcl” onto a Database Service Name of “orcl”:

orcl =
   (DESCRIPTION =
      (ADDRESS_LIST =
         (ADDRESS = (PROTOCOL = tcp)(HOST = hp)(PORT = 1521))
      )
      (CONNECT_DATA =
         (SERVICE_NAME = orcl)
      )
   )

If this is the case then using “orcl” for the TNS Service Name would work, but then you wouldn’t be testing the “orclSH” connection.  You probably don’t have this entry in your “tnsnames.ora” file or else you have a non-default Database Service Name (first verify that you can connect to the database from within SQL*Plus).  If you enter a value for the TNS Service Name equal to that of the Net Service Name from the “tnsnames.ora” file then the connection test should be successful.

OBIEE 11g Installation FAQ



Intro:

Oracle has many decades of experience in delivering “difficult to install” applications, and in the case of OBI 11g it doesn’t disappoint.  While the RCU Configuration and Installation procedures are straightforward, the “devil is in the prerequisites”.  If your machine has a static IP, then the installation should be problem free, but if your machine uses DHCP – as is quite likely if you’re evaluating the product – then you may well find yourself tossing a few expletives in Oracle’s direction.

Q: Will OBIEE 11g work on Windows XP?

You’ll see warnings about a workstation class machine.  For example in the “Enterprise Manager”, under “Business Intelligence”, “coreapplication” you’ll see:

*  WARNING: The Oracle BI Presentation Services server is running on a workstation class machine (Windows 2000 Workstation, Windows XP Professional, etc.).  Number of concurrent users may be severely limited by the operating system.

but it will work. 

Q: I’ve got a 2 GB laptop.  Can I install OBI 11g?

Well, you can, but the experience will not be a happy one.  Whereas OBI 10g was satisfied with (1.5-2) GB of memory, OBI 11g is a much more demanding mistress when it comes to hardware resources and will not perform with anything that could be called enthusiasm short of (3-4) GB – and even then you’ll do well to synchronize boot times with coffee breaks.

By way of comparison, with a 3 GB machine you can install and configure OBI 11g in an hour and 10 minutes, whereas with an otherwise identical 1.75 GB machine it takes almost four and a half hours.

Q: Can I run OBI 11g with IE 6?

No.  If you try to bring up Answers & Dashboards using IE 6, you’ll get a message saying it’s not supported and access will be denied.


Q: The “Quick Installation Guide” says to follow the instructions for DHCP hosts in the “Oracle Fusion Middleware Installation Planning Guide”, but that contains nothing on DHCP?

As you may well have noticed, the OBI 11g documentation has more than its fair share of broken links and incorrect references.  In the OBI 11g documentation references to DHCP occur in both the “Quick Installation Guide” and the “Installation Guide”.  In each guide section “4.1 Before You Install Oracle Business Intelligence” states:

*  If you plan to install Oracle Business Intelligence on a computer that uses Dynamic Host Configuration Protocol (DHCP), you must perform additional configuration tasks before you install Oracle Business Intelligence.  See "Installing on DHCP Hosts" in Oracle Fusion Middleware Installation Planning Guide.

Unfortunately the “Oracle Fusion Middleware Installation Planning Guide” does not contain any references to DHCP.  Instead try:

*  Oracle Fusion Middleware Quick Installation Guide for Oracle Portal, Forms, Reports and Discoverer 11g Release 1 (11.1.1)

section “3.6 Installing on DHCP Host” at:

*  http://download.oracle.com/docs/cd/E12839_01/install.1111/e12003.pdf

or

*  Oracle Database Installation Guide 10g Release 2 (10.2) for Microsoft Windows (32-Bit)

section “2.4.1 Installing Oracle Database on DHCP Computers” at:

*  http://download.oracle.com/docs/cd/B19306_01/install.102/b14316/reqs.htm#BABGCEAI

Q: The installation goes okay until near the end when cc and ps fail to configure correctly?

You will probably be able to bring up the Admin Console and the Enterprise Manager, but not Answers & Dashboards or the Admin tool (note, just because all the components are marked as “up” in the Enterprise Manager doesn’t mean that OBI 11g will function correctly).

You’re probably using DHCP and have not got the correct setting in the hosts file (see the following entry for a workaround).

Q: I am trying to install OBI 11g and I can’t get beyond the Admin screen?

If the installer stops at the “Administrator User Details Screen” try installing a loopback adapter or adjusting the settings in your hosts file (see the previous entry for a workaround).
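The usual DHCP workaround is to install a Microsoft Loopback Adapter with a static address and then map that address to your machine name in the hosts file.  A sketch of the hosts entry (the address and machine name here are illustrative assumptions; use your own):

```
# C:\Windows\System32\drivers\etc\hosts
# 10.10.10.10 is the static address assigned to the loopback adapter
10.10.10.10    mymachine.mydomain.com    mymachine
```

With this in place the installer resolves the machine name consistently regardless of the address DHCP hands out.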

Q: When I run “Start BI Services” from the “Oracle Business Intelligence” menu, I get “Error 500--Internal Server Error” when the web browser opens?

Let’s call it a feature.  If you get the message “Error 500--Internal Server Error – From RFC 2068 Hypertext Transfer Protocol -- HTTP/1.1: – 10.5.1 500 Internal Server Error – The server encountered an unexpected condition which prevented it from fulfilling the request”, then just refresh the browser (F5), and the logon screen should appear.

Q: Answers keeps saying Unable to Sign In, but I’ve checked the logon and password?

This probably indicates a failed installation; try re-installing.



Where to copy sh.rpd in the OBIEE installation folder?

Many OBIEE enthusiasts have run into this problem:

Where to paste the downloaded SH.zip contents?
If you are following Oracle by Example, then you must download the SH.zip file and copy its contents into the OBIEE installation folder as described below.

Q: The installing OBIEE tutorial says to copy file sh.rpd from <InstallDrive>\SetupFiles, but there’s no SetupFiles directory?

The instructions here can be a little confusing.  Oracle first states that

*  The installation creates a sample repository file (paint.rpd) in <InstallDrive>\OracleBI\server\Repository.

and then it goes on to say

*  Copy sh.rpd from \SetupFiles to \OracleBI\server\Repository.

In the prerequisites for the tutorial Oracle states that you must have access to schema SH (which you will have by default in the later database versions).  The small print associated with this section describes how to download file “SH.zip”, and in doing so gives the impression that the download is only required if you don’t have the SH schema installed.  However, “SH.zip” is required irrespective of whether you have access to the SH schema.  You’ll find the file you’re looking for, “sh.rpd”, in directory “SH/sh” after you unzip the file. 

Q: I’ve downloaded “SH.zip” to the SetupFiles folder, but I can’t find “sh.rpd” / “SH_PartTwo.rpd”?

A number of the tutorials require, as a prerequisite, that you carry out the following task:

*  Create a directory named SetupFiles under <InstallDrive>. Download and extract the SH.zip file from here to this directory named SetupFiles.

Having done the download for one tutorial, then – like any reasonably minded person – when you see the same words in the next tutorial you are apt to assume that you have already completed the task, and conclude that you don’t have to do it again.  Not so!  Different tutorials use different versions of “SH.zip”.  For example, tutorials one and three use the same version, one that contains a repository called “sh.rpd”; tutorial two uses a version that contains a repository called “SH_PartTwo.rpd”; tutorial eight also uses a repository called “sh.rpd”, but it's a different repository from the one used by tutorials one and three:

*  Tutorials 1&3:

 http://www.oracle.com/technology/ ...... obe/obe_bi/bi_ee_1013/saw/files/SH.zip

*  Tutorial 2:

 http://www.oracle.com/technology/ ...... obe/obe_bi/bi_ee_1013/bi_admin/files/SH.zip

*  Tutorial 8:

 http://www.oracle.com/technology/ ...... obe/obe_bi/bi_ee_1013/mude/files/SH.zip 

OBIEE Interview questions and FAQ




Here is a list of FAQs and interview questions for OBIEE/Siebel Analytics.



These questions relate to the product previously known as Siebel Analytics, now known as OBIEE, i.e. Oracle Business Intelligence Enterprise Edition.

” Define repository in terms of Siebel Analytics
o The repository stores the metadata information. The Siebel repository is a file; the extension of the repository file is .rpd.
o METADATA REPOSITORY
o With Siebel Analytics Server, all the rules needed for security, data modeling, aggregate navigation, caching, and connectivity are stored in metadata repositories.
o Each metadata repository can store multiple business models. Siebel Analytics Server can access multiple repositories.



” What is the end-to-end life cycle of Siebel Analytics?
o Siebel Analytics life cycle:
1. Gather business requirements.
2. Identify source systems.
3. Design ETL to load to a DW if source data doesn’t exist.
4. Build a repository.
5. Build dashboards or use Answers for reporting.
6. Define security (LDAP or external table…).
7. Based on performance, decide on aggregations and/or a caching mechanism.
8. Testing and QA.



” What were your schemas? How does the Siebel architecture work? Explain the three layers. How do you import sources?
o There are five parts of the Siebel architecture:
1. Clients
2. Siebel Analytics Web Server
3. Siebel Analytics Server
4. Siebel Analytics Scheduler
5. Data sources
o Metadata that represents the analytical model is created using the Siebel Analytics Administration Tool.
o The repository is divided into three layers:
1. Physical – represents the data sources
2. Business – models the data sources into facts and dimensions
3. Presentation – specifies the user’s view of the model; rendered in Siebel Answers



” If you have 3 facts and 4 dimensions and you need to join, would you recommend joining fact with fact? If not, what is the option? Why won’t you join fact to fact?
o In the BMM layer, create one logical table (fact) and add the 3 fact tables as logical table sources.

” What is a connection pool and how many connection pools did you have in your last project?
o A connection pool is needed for every physical database.
o It contains information about the connection to the database, not the database itself.
o It can use either shared user accounts or pass-through accounts (use :USER and :PASSWORD for pass-through).
o We can have multiple connection pools for each group to avoid waiting.



” Purpose of Alias Tables
o An Alias table (Alias) is a physical table with the type of Alias. It is a reference to a logical table source, and inherits all its column definitions and some properties from the logical table source. A logical table source shows how the logical objects are mapped to the physical layer and can be mapped to physical tables, stored procedures, and select statements. An alias table can be a reference to any of these logical table source types.
o Alias Tables can be an important part of designing a physical layer. The following is a list of the main reasons to create an alias table:
” To reuse an existing table more than once in your physical layer (without having to import it several times)
” To set up multiple alias tables, each with different keys, names, or joins
o To help you design sophisticated star or snowflake structures in the business model layer. Alias tables are critical in the process of converting ER Schemas to Dimensional Schemas.



” How do you define the relationship between facts and dimensions in the BMM layer?
o Using a complex join, we can define the relationship between facts and dimensions in the BMM layer.


” What is the Time Series Wizard? When and how do you use it?
o We can do comparisons for certain measures (revenue, sales, etc.) for current year vs. previous year; we can do it for month, week, and day also.
o Identify the time periods that need to be compared, and the period table keys to the previous time period.
o The period table needs to contain a column that will contain “Year Ago” information.
o The fact tables need to have year-ago totals.
o To use the Time Series Wizard: after creating your business model, right-click the business model and click on “Time Series Wizard”.
o The Time Series Wizard prompts you to create names for the comparison measures that it adds to the business model.
o The Time Series Wizard prompts you to select the period table used for the comparison measures.
o Select the column in the period table that provides the key to the comparison period. This would be the column containing “Year Ago” information in the period table.
o Select the measures you want to compare and then select the calculations you want to generate. For example: measure: Total Dollars; calculations: Change and Percent Change.
o Once the Time Series Wizard is run, the output will be:
a) Aliases for the fact tables (in the physical layer)
b) Joins between the period table and the alias fact tables
c) Comparison measures
d) Logical table sources
o In the General tab of the logical table source you can find “Generated by Time Series Wizard” in the description section.
o Then you can add these comparison measures to the presentation layer for your reports.
o Example: total sales of current quarter vs. previous quarter vs. same quarter a year ago.
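The comparison measures the wizard generates behave like ordinary derived logical columns. A hand-written sketch of the two calculations (the measure names here are illustrative assumptions, not the wizard’s exact output):

```sql
-- Change vs. a year ago:
"Total Dollars" - "Year Ago Total Dollars"

-- Percent change vs. a year ago:
100.0 * ("Total Dollars" - "Year Ago Total Dollars") / "Year Ago Total Dollars"
```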



” Did you create any new logical columns in the BMM layer? How?
o Yes, we can create a new logical column in the BMM layer.
o Example: right-click on the fact table -> New Logical Column -> give a name for the new logical column, like Total Cost.
o Now in the fact table source there is a Column Mapping option, where we can do all the calculations for that new column.



” Can you use a physical join in the BMM layer?
o Yes, we can use a physical join in the BMM layer. When there is an SCD type 2, we need a complex join in the BMM layer.


” Can you use an outer join in the BMM layer?
o Yes, we can. When we are doing a complex join in the BMM layer, there is a Type option where an outer join is available.


” What are other ways of improving summary query reports other than aggregate navigation and cache management?
” Indexes
” Join algorithm
” Materialized view query rewrite
” Proper report design – keep it optimal by making sure the report is not fetching any additional columns or rows


” What are level-based metrics?
o A level-based metric means having a measure pinned at a certain level of the dimension. For example, if you have a measure called “Dollars”, you can create a level-based measure called “Yearly Dollars” which (you guessed it) is Dollars for a year. This measure will always return the value for the year even if you drill down to a lower level like quarter, month, etc. To create a level-based measure, create a new logical column based on the original measure (like Dollars in the example above). Drag and drop the new logical column to the appropriate level in the dimension hierarchy (in the above example you would drag and drop it to Year in the Time dimension).
o A level-based metric is a metric that is defined for a specific level or intersection of levels.
o Monthly Total Sales and Quarterly Sales are examples.
o You can compare monthly sales with quarterly sales. You can compare customer orders this quarter to orders this year.


” What is the logging level? Where can you set logging levels?
o You can enable a logging level for individual users; you cannot configure a logging level for a group.
o Set the logging level based on the amount of logging you want to do. In normal operations, logging is generally disabled (the logging level is set to 0). If you decide to enable logging, choose a logging level of 1 or 2. These two levels are designed for use by Siebel Analytics Server administrators.
o To set the logging level:
1. In the Administration Tool, select Manage > Security. The Security Manager dialog box appears.
2. Double-click the user’s user ID. The User dialog box appears.
3. Set the logging level by clicking the Up or Down arrows next to the Logging Level field.



” What is a variable in Siebel?
o You can use variables in a repository to streamline administrative tasks and modify metadata content dynamically to adjust to a changing data environment. The Administration Tool includes a Variable Manager for defining variables.


” What are system variables and non-system variables?
o System variables:
o System variables are session variables that the Siebel Analytics Server and Siebel Analytics Web use for specific purposes. System variables have reserved names, which cannot be used for other kinds of variables (such as static or dynamic repository variables, or for non-system session variables).
o When using these variables in the Web, preface their names with NQ_SESSION. For example, to filter a column on the value of the variable LOGLEVEL, set the filter to the variable NQ_SESSION.LOGLEVEL.
o Non-system variables:
o A common use for non-system session variables is setting user filters. For example, you could define a non-system variable called SalesRegion that would be initialized to the name of the user’s sales region. You could then set a security filter for all members of a group that would allow them to see only data pertinent to their region.
o When using these variables in the Web, preface their names with NQ_SESSION. For example, to filter a column on the value of the variable SalesRegion, set the filter to the variable NQ_SESSION.SalesRegion.
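In a repository data filter, the same session variable is referenced with VALUEOF. A sketch of such a filter (the subject area, table, and column names are illustrative assumptions):

```sql
-- Data filter on a group, restricting rows to the user's own region:
"SH"."Sales"."Region" = VALUEOF(NQ_SESSION."SalesRegion")
```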



” What are the different types of variables? Explain each.
o There are two classes of variables:
1. Repository variables
2. Session variables
Repository variables:
A repository variable has a single value at any point in time. There are two types of repository variables:
Static: the value persists, and does not change until a Siebel Analytics Server administrator decides to change it.
Dynamic: the values are refreshed by data returned from queries. When defining a dynamic repository variable, you will create an initialization block or use a preexisting one that contains a SQL query. You will also set up a schedule that the Siebel Analytics Server will follow to execute the query and periodically refresh the value of the variable.
Session variables:
Session variables are created and assigned a value when each user logs on. There are two types of session variables:
1. System
2. Non-system



” What is cache management? Name all the methods and their uses. For an event polling table, do you need the table in your physical layer?
o Monitoring and managing the cache is cache management. There are three ways to do that:
o Disabling caching for the system (in the NQSConfig.INI file), setting a cache persistence time for specified physical tables, and setting up an event polling table.
o Disabling caching for the system (NQSConfig.INI file):
You can disable caching for the whole system by setting the ENABLE parameter to NO in the NQSConfig.INI file and restarting the Siebel Analytics Server. Disabling caching stops all new cache entries and stops any new queries from using the existing cache. Disabling caching allows you to enable it at a later time without losing any entries already stored in the cache.
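In NQSConfig.INI the switch lives in the CACHE section; a minimal sketch (the section’s other parameters, such as the cache storage paths, are omitted here):

```ini
# NQSConfig.INI -- CACHE section
[ CACHE ]
ENABLE = NO ;   # set back to YES to re-enable without losing stored entries
```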
o Cache persistence time for specified physical tables:
You can specify a cacheable attribute for each physical table; that is, whether queries involving the specified table can be added to the cache to answer future queries. To enable caching for a particular physical table, select the table in the Physical layer of the Administration Tool and select the option Make table cacheable in the General tab of the Physical Table properties dialog box. You can also use the Cache Persistence Time settings to specify how long the entries for this table should persist in the query cache. This is useful for OLTP data sources and other data sources that are updated frequently, potentially down to every few seconds.
o Setting up an event polling table:
Siebel Analytics Server event polling tables store information about updates in the underlying databases. An application (such as an application that loads data into a data mart) could be configured to add rows to an event polling table each time a database table is updated. The Analytics Server polls this table at set intervals and invalidates any cache entries corresponding to the updated tables.
o The event polling table is a standalone table and doesn’t need to be joined with other tables in the physical layer.
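A minimal event polling table can be sketched as follows. The column names follow the S_NQ_EPT convention used by the scripts shipped with the server, but check the script for your release before relying on them:

```sql
CREATE TABLE S_NQ_EPT (
  UPDATE_TYPE    NUMBER(10)    NOT NULL,   -- 1 = table updated
  UPDATE_TS      DATE          NOT NULL,   -- when the update happened
  DATABASE_NAME  VARCHAR2(120) NULL,
  CATALOG_NAME   VARCHAR2(120) NULL,
  SCHEMA_NAME    VARCHAR2(120) NULL,
  TABLE_NAME     VARCHAR2(120) NOT NULL,   -- physical table that changed
  OTHER_RESERVED VARCHAR2(120) NULL
);
```

The ETL process inserts a row here after each load; the server polls the table, purges the matching cache entries, and then deletes the processed rows.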



” What is authentication? How many types of authentication are there?
o Authentication is the process by which a system verifies, through the use of a user ID and password, that a user has the necessary permissions and authorizations to log in and access data. The Siebel Analytics Server authenticates each connection request it receives. The types are:
” Operating system authentication
” External table authentication
” Database authentication
” LDAP authentication



” What is object-level security?
o There are two types of object-level security: repository level and Web level.
o Repository level: in the Presentation layer we can set repository-level security by granting or denying permission to users/groups to see a particular table or column.
o Web level: this provides security for objects stored in the Siebel Analytics Web Catalog, such as dashboards, dashboard pages, folders, and reports. You can only view the objects for which you are authorized. For example, a mid-level manager may not be granted access to a dashboard containing summary information for an entire department.


” What is data-level security?
o This controls the type and amount of data that you can see in a report. When multiple users run the same report, the results that are returned to each depend on their access rights and roles in the organization. For example, a sales vice president sees results for all regions, while a sales representative for a particular region sees only data for that region.


” What is the difference between data-level security and object-level security?
o Data-level security controls the type and amount of data that you can see in a report. Object-level security provides security for objects stored in the Siebel Analytics Web Catalog, like dashboards, dashboard pages, folders, and reports.


” How do you implement security using External Tables and LDAP?
o Instead of storing user IDs and passwords in a Siebel Analytics Server repository, you can maintain lists of users and their passwords in an external database table and use this table for authentication purposes. The external database table contains user IDs and passwords, and could contain other information, including group membership and display names used for Siebel Analytics Web users. The table could also contain the names of specific database catalogs or schemas to use for each user when querying data
o Instead of storing user IDs and passwords in a Siebel Analytics Server repository, you can have the Siebel Analytics Server pass the user ID and password entered by the user to an LDAP(Lightweight Directory Access Protocol ) server for authentication. The server uses clear text passwords in LDAP authentication. Make sure your LDAP servers are set up to allow this.
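External table authentication is wired up with a session initialization block whose SQL references the :USER and :PASSWORD placeholders; a sketch against a hypothetical security table (the table and column names are assumptions):

```sql
-- Init block SQL for external table authentication; the server substitutes
-- :USER and :PASSWORD with the values typed at logon.
SELECT username, group_name, display_name
FROM   security_users
WHERE  username = ':USER'
AND    password = ':PASSWORD'
```

The selected columns are then mapped to the USER, GROUP, and DISPLAYNAME session variables in the initialization block’s variable targets.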



” If you have 2 facts and you want to report on one at quarter level and the other at month level, how do you do that with just one time dimension?
o Using level-based metrics.

” Did you work on a standalone Siebel system or was it integrated with other platforms?
o Deploying the Siebel Analytics platform without other Siebel applications is called Siebel Analytics stand-alone. If your deployment includes other Siebel Analytics applications, it is called integrated analytics.


” How do you sort columns in the rpd and on the web?
o On the web, sort on the column directly; in the rpd, use the Sort Order Column property of the logical column.


” If you want to create a new logical column, where will you create it (in the repository or the dashboard)? Why?
o I will create the new logical column in the repository, because if it is in the repository you can use it for any report. If you create a new logical column in the dashboard, it only affects the reports on that dashboard; you cannot use that new logical column for another dashboard (or request).


” What is a complex join, and where is it used?
o We can join a dimension table and a fact table in the BMM layer using a complex join. When there is an SCD type 2, we have to use a complex join in the BMM layer.


” If you have dimension tables like customer, item, and time and a fact table like sale, and you want to find out how often a customer comes to the store and buys a particular item, what will you do?
o Write a query such as:

   SELECT customer_name, item_name, sale_date, SUM(qty)
   FROM customer_dim a, item_dim b, time_dim c, sale_fact d
   WHERE d.cust_key = a.cust_key
   AND d.item_key = b.item_key
   AND d.time_key = c.time_key
   GROUP BY customer_name, item_name, sale_date
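If “how often” is meant literally – the number of visits on which the customer bought the item – a COUNT over the same assumed schema might read:

```sql
SELECT customer_name, item_name,
       COUNT(DISTINCT sale_date) AS visits,
       SUM(qty)                  AS total_qty
FROM   customer_dim a, item_dim b, time_dim c, sale_fact d
WHERE  d.cust_key = a.cust_key
AND    d.item_key = b.item_key
AND    d.time_key = c.time_key
GROUP BY customer_name, item_name
```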

” You worked on standalone or integrated system?
o Standalone.

” If you want to limit the users of a certain region to access only certain data, what would you do?
o Using data-level security.
o In the Siebel Analytics Administration Tool: go to Manage -> Security; in the left-hand pane you will find the users, groups, LDAP server, and hierarchy.
Select the user, right-click, and go to Properties; you will find two tabs, Users and Logon. Go to the User tab and click the Permissions button in front of the user name you have selected. As soon as you click Permissions you will get a new window, User/Group Permissions, with three tabs: General, Query Limits, and Filters. You can specify your condition on the Filters tab, where you can select presentation tables, presentation columns, logical tables, and logical columns, and apply the condition according to your requirement for the selected user or group.


” If there are 100 users accessing data, and you want to know the logging details of all the users, where can you find that?
o To set a user’s logging level:
1. In the Administration Tool, select Manage > Security. The Security Manager dialog box appears.
2. Double-click the user’s user ID. The User dialog box appears.
3. Set the logging level by clicking the Up or Down arrows next to the Logging Level field.


” How do you implement an event polling table?
o Siebel Analytics Server event polling tables store information about updates in the underlying databases. An application (such as an application that loads data into a data mart) could be configured to add rows to an event polling table each time a database table is updated. The Analytics Server polls this table at set intervals and invalidates any cache entries corresponding to the updated tables.
” Can you migrate the presentation layer only to a different server?
o No, we can’t migrate only the presentation layer. Alternatives:
o Create an ODBC connection on the different server and access the layer.
o Copy the rpd and migrate it to the other server.


” Define pipeline. Did you use it in your projects?
o Yes, pipelines are the stages in a particular transaction: assessment, finance, etc.

” How do you create a filter in the repository?
o Using the WHERE condition on the Content tab of the logical table source.

” How do you work in a multi-user environment? What are the steps?
o Create a shared directory on the network for Multi-user Development (MUD).
o Open the rpd to use in MUD. From Tools -> Options, set up the MUD directory to point to the above directory.
o Define projects within the rpd to allow multiple users to develop within their subject areas or facts.
o Save and move the rpd to the shared directory set up in step 1.
o When users work in MUD mode, they open the Admin Tool and start with MUD -> Checkout to check out the project they need to work on (not File -> Open as you would usually do).
o After completing the development, users check in the changes back to the network and merge the changes.


” Where are the passwords for userid, LDAP, and external table authentication stored, respectively?
o Passwords for a userid are stored in the Siebel Analytics Server repository; LDAP authentication passwords are stored in the LDAP server; external table authentication passwords are stored in a table in the external database.

” Can you bypass Siebel Analytics Server security? If so, how?
o Yes, you can bypass it by setting the authentication type in the security section of the NQSConfig.ini file: AUTHENTICATION_TYPE = BYPASS_NQS. instanceconfig.xml and NQSConfig.ini are the two configuration files involved.
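As a sketch, the relevant NQSConfig.ini fragment would look something like this; verify the exact section layout and syntax against your version's configuration file reference:

```ini
[ SECURITY ]
AUTHENTICATION_TYPE = BYPASS_NQS ;
```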
” Where can you add new groups and set permissions?
o You can add groups by going to Manage > Security > Add New Group. You can give a group permissions for query limits and filter conditions.


” What are the things you can do in the BMM layer?
o Aggregation navigation, level-based metrics, the time series wizard, creating new logical columns, and complex joins.

” What is a ragged hierarchy, and how do you manage it?
o Ragged Hierarchy is one of the different kinds of hierarchy.
o A hierarchy in which each level has a consistent meaning, but the branches have inconsistent depths because at least one member attribute in a branch level is unpopulated. A ragged hierarchy can represent a geographic hierarchy in which the meaning of each level such as city or country is used consistently, but the depth of the hierarchy varies.
o For example, a geographic hierarchy that has Continent, Country, Province/State, and City levels defined. One branch has North America as the Continent, United States as the Country, California as the Province or State, and San Francisco as the City. However, the hierarchy becomes ragged when one member does not have an entry at all of the levels. For example, another branch has Europe as the Continent, Greece as the Country, and Athens as the City, but has no entry for the Province or State level because this level is not applicable to Greece for the business model in this example. In this example, the Greece and United States branches descend to different depths, creating a ragged hierarchy.


” What is the difference between Single Logical Table Source and Multiple Logical Table Sources?
o If a logical table in BMM layer has only one Table as the source table then it is Single LTS.
o If the logical table in BMM layer has more than one table as the sources to it then it is called Multiple LTS.
o Ex: Usually a fact table has multiple LTSs, whose sources come from different physical tables.
” Can you let me know how many aggregate tables you have in your project? On what basis have you created them?
o As per resume justification document


” How do you bring/relate the aggregate tables into the Siebel analytics Logical layer?
o One way of bringing the Aggregate Tables into the BMM layer is by bringing them as Logical Table sources for the corresponding Fact table.
o This is done by dragging and dropping the aggregate table into the corresponding fact table. After doing that, establish the column mappings and set the aggregation levels.


” How do you know which report is hitting which table, either the fact table or the aggregate table?
o After running the report, go to the “Administration” tab and click on “Manage Sessions”. There you can find the queries that were run, and the “View Log” option in Session Management shows which report is hitting which table.

” Suppose I have a report which typically runs for about 3 minutes. What is the first step you take to improve the performance of the query?
o Find the SQL query of the report via Administration > Manage Sessions, run the SQL query in Toad, read the explain plan output, and modify the SQL based on that output.
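On Oracle, for instance, the explain-plan step can be done as follows; the SELECT below is a hypothetical placeholder for the report's physical SQL, and the table names are assumed:

```sql
-- Capture and display the execution plan for the report's physical SQL.
EXPLAIN PLAN FOR
SELECT t.calendar_year, SUM(f.revenue)
FROM   w_sales_f f JOIN w_day_d t ON f.day_wid = t.row_wid
GROUP  BY t.calendar_year;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```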

” Suppose you have a report which has the option of running on aggregate table. How does the tool know to hit the Aggregate table and for that what the steps you follow to configure them?
o Explain the process of Aggregate navigation
” Have you heard of implicit facts? If so, what are they?
o An implicit fact column is a column that will be added to a query when it contains columns from two or more dimension tables and no measures. You will not see the column in the results. It is used to specify a default join path between dimension tables when there are several possible alternatives.
o For example, there might be many star schemas in the database that have the Campaign dimension and the Customer dimension, such as the following stars:
” Campaign History star. Stores customers targeted in campaign.
” Campaign Response star. Stores customer responses to a campaign.
” Order star. Stores customers who placed orders as a result of a campaign.
In this example, because Campaign and Customer information might appear in many segmentation catalogs, users selecting to count customers from the targeted campaigns catalog would be expecting to count customers that have been targeted in specific campaigns.
” To make sure that the join relationship between Customers and Campaigns is through the campaign history fact table, a campaign history implicit fact needs to be specified in the Campaign History segmentation catalog. The following guidelines should be followed in creating segmentation catalogs:
” Each segmentation catalog should be created so that all columns come from only one physical star.
” Because the Marketing module user interface has special features that allow users to specify their aggregations, level-based measures typically should not be exposed to segmentation users in a segmentation catalog.


” What is aggregate navigation? How do you configure the Aggregate tables in Siebel Analytics?
o Aggregate tables store precomputed results, which are measures that have been aggregated (typically summed) over a set of dimensional attributes. Using aggregate tables is a very popular technique for speeding up query response times in decision support systems.
o If you are writing SQL queries or using a tool that only understands what physical tables exist (and not their meaning), taking advantage of aggregate tables and putting them to good use becomes more difficult as the number of aggregate tables increases. The aggregate navigation capability of the Siebel Analytics Server, however, allows queries to use the information stored in aggregate tables automatically, without query authors or query tools having to specify aggregate tables in their queries. The Siebel Analytics Server allows you to concentrate on asking the right business question; the server decides which tables provide the fastest answers.
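As an illustration of what aggregate navigation buys you (all table, column, and level names here are hypothetical), the same month-level logical request can be answered from a monthly aggregate instead of the daily detail fact once the aggregate is mapped as a logical table source at the Month level:

```sql
-- Physical SQL without aggregate navigation (daily grain):
SELECT d.month_name, SUM(f.revenue)
FROM   w_sales_f f JOIN w_day_d d ON f.day_wid = d.row_wid
GROUP  BY d.month_name;

-- Physical SQL the server can generate instead, using the
-- pre-summed monthly aggregate:
SELECT m.month_name, SUM(a.revenue)
FROM   w_sales_f_a_month a JOIN w_month_d m ON a.month_wid = m.row_wid
GROUP  BY m.month_name;
```

The query author writes neither: both are generated from the same logical request, and the server picks the cheaper source.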


” (Assume you are in the BMM layer.) We have 4 dimension tables, of which 2 need hierarchies. In such a case, is it mandatory to create hierarchies for all the dimension tables?
o No, it is not mandatory to define hierarchies for the other dimension tables.

” Can you have multiple data sources in Siebel Analytics?
o Yes.

” How do you deal with CASE statements and expressions in Siebel Analytics?
o Use the Expression Builder to create a CASE WHEN … THEN … END statement.
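For example, an Expression Builder formula banding a revenue measure might read as follows (the subject area, table, and column names are hypothetical):

```sql
-- Hypothetical Expression Builder formula banding a revenue measure.
CASE
  WHEN "Sales"."Fact Sales"."Revenue" >= 100000 THEN 'High'
  WHEN "Sales"."Fact Sales"."Revenue" >= 10000  THEN 'Medium'
  ELSE 'Low'
END
```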

” Do you know about initialization blocks? Can you give me an example where you used them?
o Initialization blocks are used to initialize a session when a user logs in.
o To create a dynamic variable, you create an initialization block and write the SQL statement that populates it.
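A typical initialization-block query populates a dynamic repository variable such as CURRENT_DAY; the day-dimension table name below is a hypothetical example:

```sql
-- Hypothetical init block SQL: refresh a CURRENT_DAY repository
-- variable from the day dimension on the block's schedule.
SELECT MAX(calendar_date)
FROM   w_day_d
WHERE  calendar_date <= CURRENT_DATE;
```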


” What is the Query Repository tool?
o It is a utility of the Siebel/OBIEE Admin Tool.
o It allows you to examine the repository metadata;
o for example, you can search for objects based on name or type,
o or examine relationships between metadata objects, such as which column in the presentation layer maps to which table in the physical layer.


” what is JDK and why do we need it?
o The Java Development Kit (JDK) is a software package that contains the minimal set of tools needed to write, compile, debug, and run Java applications.

” Oracle doesn’t recommend opaque views because of performance considerations, so why/when do we use them?
o An opaque view is a physical layer table that consists of a SELECT statement. An opaque view should be used only if there is no other solution.
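For illustration, an opaque view's SELECT might pre-join sources that cannot otherwise be modeled; the tables and columns below are hypothetical:

```sql
-- Hypothetical opaque view definition entered in the physical layer:
-- pre-joins two source tables the model cannot otherwise relate.
SELECT c.customer_id,
       c.customer_name,
       SUM(o.amount) AS lifetime_amount
FROM   customers c JOIN legacy_orders o ON o.cust_no = c.customer_id
GROUP  BY c.customer_id, c.customer_name
```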

” Can you migrate the presentation layer to a different server?
o No, we have to migrate the whole web catalog and rpd files together.

” How do you identify what are the dimension tables and how do you decide them during the Business/Data modeling?
o Dimension tables contain descriptions that data analysts use as they query the database. For example, the Store table contains store names and addresses; the Product table contains product packaging information; and the Period table contains month, quarter, and year values. Every table contains a primary key that consists of one or more columns; each row in a table is uniquely identified by its primary-key value or values
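A minimal sketch of that structure (all names hypothetical): each dimension row is uniquely identified by its primary key, which the fact table references.

```sql
-- Hypothetical Period dimension and Sales fact sketch.
CREATE TABLE period_d (
  period_key INTEGER PRIMARY KEY,
  month      VARCHAR(20),
  quarter    VARCHAR(10),
  year       INTEGER
);

CREATE TABLE sales_f (
  period_key INTEGER REFERENCES period_d (period_key),
  store_key  INTEGER,
  revenue    DECIMAL(12,2)
);
```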

” Why do we have multiple LTSs in the BMM layer? What is the purpose?
o To improve performance and query response time by letting one logical table draw from more than one physical source.

” What is the full form of rpd?
o There is no official full form for rpd as such; it is just the repository file extension (sometimes expanded as Rapidfile Database).

” How do I disable the cache for only two particular tables?
o In the physical layer, right-click on each table and open its properties; there you will find the “Cacheable” option, which can be unchecked.

” How do you split a table in the rpd given a condition? (The condition given was Broker and Customer in the same table.) Split Broker and Customer.
o We need to make alias tables in the physical layer, one per entity, and restrict each with a filter.
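A sketch of the idea (all names hypothetical): each alias points at the same physical table, and a WHERE filter on the corresponding logical table source restricts the rows it exposes.

```sql
-- Hypothetical: Dim_Broker and Dim_Customer are both physical aliases
-- of the same PARTY table; each logical table source is then filtered.

-- Content filter on the Broker logical table source:
"DB".""."CRM"."Dim_Broker"."PARTY_TYPE" = 'Broker'

-- Content filter on the Customer logical table source:
"DB".""."CRM"."Dim_Customer"."PARTY_TYPE" = 'Customer'
```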

” What type of protocol did you use in SAS?
o TCP/IP