Tuesday, July 24, 2007

Reply to a comment - Hyperion licenses

This is a post in response to a comment to a post (Hyperion products now available for download)
I have reproduced the entire comment here for convenience:
Anonymous said...

I noticed that on Oracle Technology Network, a developer can download the software with a Development License.
Is there a place where a developer can download the Hyperion products with a "Development License" to evaluate/learn the products or even prepare for certification?

The reason I am asking is that the 30-day trial period on the e-delivery website is barely enough for something as complex as the Hyperion products. It takes a while to install/configure them!! Most existing Hyperion customers have only a few products licensed, and in the past Hyperion made it very difficult for developers at those customers to get a trial license (you had to go through the sales team, who wouldn't help you unless they saw immediate money).

If there was a development license for Hyperion products, I am sure many developers at existing customers would try out some of the remaining products that they don't currently have a license for and recommend them to the business users. It would be fine if the development license terms restricted use to the individual developer for non-production use only (there are paid Hyperion developer tools, such as Dashboard Architect, Studio, etc., that must still be paid for if used in a production environment, and any development done under a Development License cannot be installed in production unless a paid license has also been acquired).

Note: This individual developer "development license" would be different from the "paid development environment" license which is where the development team does the development before installing it to the "paid production environment".
(hopefully it is not too confusing)
The short answer to this question is that once software is available from OTN for download, there **should** not be any license requirement.

Oracle software - all of it - comes without timed or trial licenses that lock you out of the product, or restrict you to limited functionality, after x number of days. Take the case of Siebel Business Analytics, which became Oracle Business Intelligence Suite Enterprise Edition (with some products like Oracle BI Publisher joining this suite - BI Publisher in turn came from the Oracle Applications side of the company and went by the very exciting name of XML Publisher). It too had a license key that was required to install and use the different products therein, like Answers, Dashboards, Disconnected, etc. It took some time to take the license key requirements out of the product, but when Oracle released Oracle BI EE (code-named 'Maui'), it had no license key restrictions. BI EE does not have a license key, nor will the next release (due very soon), or any other releases.

I totally agree that a 30-day license is probably not enough to install, use, and learn the product family - I would prefer 90 days - and Hyperion has many products that do not exactly fall into the 'Notepad' variety of easy-to-install-and-learn products (Planning, budgeting, financial performance management, etc.).

At this point I don't really have an answer - what you could try is to download and install the products with the 30 day license, and after the 30 day period try and obtain a new 30 day license file and see if that works.

Please do not take this as 'legal' advice from Oracle. This is my personal 2 cents (or चार आने, "char anne", in my lingo).

Friday, July 20, 2007

Petrol and OLAP

According to this NYT article (Big Rise Seen in Demand for Energy - New York Times), "It started with a simple question by Samuel W. Bodman, the energy secretary: What does the future hold for supplies of oil and natural gas?".
In order to answer this question, the National Petroleum Council developed an analytic data warehouse using Oracle relational and OLAP technology. Implemented by Vlamis Consulting, the NPC report states:
The data warehouse was designed to be the main analytical tool for the Task Groups, accepting all data collected from the survey questionnaire and other data sources. As the survey data were multi-dimensional, Oracle OLAP database technology was used and the collection was organized using 7 dimensions:
1. Time (year)
2. Geography (country or geographic region)
3. Energy type (e.g., Oil, Gas, Coal, Nuclear, Renewable)
4. Energy sector (e.g., Commercial, Residential)
5. Case type (e.g., Business as usual, Alternative energy policy)
6. Units (applicable unit of measure)
7. Source (e.g., Public, Proprietary)
The statistics contained in the report posed interesting technical challenges, including non-additive data, skip-level hierarchies, non-standard aggregation rules, and more - all of which Oracle OLAP is designed to manage. Discoverer OLAP was used to analyze the data.
You can find more on the Vlamis blog here (Vlamis Software » Blog Archive » Vlamis Used Oracle OLAP For National Petroleum Council Study)

Thanks to Marty Gubar from OLAP Product Management for passing this on.

Wednesday, July 18, 2007

How to limit the disk cache used by the BI Server

This is from a question asked on our internal mailing lists... (yes, I cannot seem to find the time to write new, original posts)

"Is there any way to limit the amount of disk space taken up by data stored on the BI Server for aggregations, calculations, etc?"

The short answer is "yes".
NQSConfig.ini (under C:\OracleBI\server\Config) has a section named [ CACHE ] for this purpose.

It reads something like this (the angle-bracketed part of the comment is reconstructed from the standard NQSConfig.ini file):

# Query Result Cache Section
[ CACHE ]
ENABLE = YES;
// A comma separated list of <directory maxSize> pairs
// e.g. DATA_STORAGE_PATHS = "d:\OracleBIData\nQSCache" 500 MB;
DATA_STORAGE_PATHS = "C:\OracleBIData\cache" 500 MB;
MAX_ROWS_PER_CACHE_ENTRY = 100000; // 0 is unlimited size

The cache files themselves are written to the DATA_STORAGE_PATHS location - C:\OracleBIData\cache in my case.

For more information on what these settings mean and do, see pages 229-240 of the "Oracle® Business Intelligence Server Administration Guide, Version, December 2006 (Part No B31770-01)" for a detailed explanation of the different settings and a brief exposition of the caching strategies.

Thanks to Krishnan Viswanathan from BI product management for providing the answer (yes... I am rendering credit unto others - it wouldn't be nice on my part to steal and scoot -
चोरी और सीना जोरी, "chori aur seena jori" (roughly, brazen theft), is not something I subscribe to).

Hyperion products now available for download

Firstly, if you want to try out the Hyperion BI products, you can do so by going to http://edelivery.oracle.com/ and selecting "Hyperion Performance Management and BI" from the "Product Pack" dropdown. This returns the following products available for download:
  1. Hyperion Performance Management and BI (9.3.x)
  2. Hyperion Performance Management and BI (9.2.x)
  3. Hyperion Enterprise Release (6.4.1)
  4. Hyperion Business Modeling (4.0.5)
  5. Hyperion Application Link (9.x) Media Pack

You can download individual product components where available, a license key, and also the product documentation - which is a good place to begin.

Secondly, to view a brief description of what these products are, at least at the 'platform' level, go to this page on OTN: Oracle Business Intelligence Platform Home. You will find the following four Hyperion products listed as part of "Oracle Business Intelligence Enterprise Edition Plus":
  1. Interactive Reporting : Intuitive and highly interactive ad-hoc reporting
  2. SQR Production Reporting : High volume, presentation-quality formatted report generation
  3. Hyperion Financial Reporting : Formatted, book-quality financial and management reporting
  4. Hyperion Web Analysis : Web-based online analytical processing (OLAP) analysis, presentation, and reporting
Thirdly, the Hyperion products are now also listed on the Oracle price-list. The BI Suite EE has been renamed to BI Suite EE Plus (yeah... three cheers for branding).

Wow... three factoids in one post. I must be on a roll here...

Monday, July 16, 2007

What no GUI? How to Maintain Analytic Workspaces using Scripts

This is the second part of a series on a non-GUI based approach to building and maintaining analytic workspaces. Some people might struggle with the requirement to provide a script to rebuild an analytic workspace when Oracle provides a simple GUI tool called Analytic Workspace Manager - unless, of course, you are a DBA.

But when it comes to day-to-day data loading, scripting is really the only way forward. Fortunately, Analytic Workspace Manager can take the strain out of creating these scripts; once generated, they are extremely easy to modify and move around from server to server.

The purpose of this post is to provide an overview of these maintenance scripts. There are three basic categories of scripts that need to be run during a maintenance cycle:
  • Dimension maintenance
  • Cube loading
  • Aggregation

Options to Control Dimension Maintenance
Dimension scripts are relatively simple, and the likely modifications relate to options for setting all current attributes to NA and removing dimension members no longer present in the source table.

How do you identify the dimension being processed? The XML file has a tag called XMLIDref that identifies the dimension to be processed, as shown here:


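For illustration, a dimension maintenance action generated by AWM looks something like the following. The names (SH_OLAP.SH_AW, CHANNEL) and attribute values are examples only, not taken from the original post:

```xml
<BuildDatabase Id="Action1" AWName="SH_OLAP.SH_AW" BuildType="EXECUTE"
               CleanAttrs="true" CleanDim="false"
               TrackStatus="false" MaxJobQueues="0">
  <!-- XMLIDref identifies the dimension to be maintained -->
  <BuildList XMLIDref="CHANNEL.DIMENSION" />
</BuildDatabase>
```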
In a maintenance script you can either maintain each dimension using individual scripts, or you can insert multiple lines of XMLIDref identifiers.

Why and when do you need to reset attributes? This is only necessary when you need to set an attribute to NA to remove its current display label and the next planned load will not replace that label with another one. Otherwise, pre-existing entries for attribute labels are automatically overwritten during a load. The tag to control this process is:

CleanAttrs="false" or CleanAttrs="true"

Why and when do you need to remove dimension members? There may be cases where you actually want to remove a dimension member from the AW. Some people might see this as a way to manage slowly changing dimensions within OLAP, but there is a very nasty side effect of using this feature: the process of deleting a member removes all the data related to that member across all the cubes to which it contributed. Therefore, you should think very carefully before enabling this tag. Once the data is gone, the data is gone - unless you have a backup or sufficient flashback space enabled (which, from an AW perspective, is highly unlikely). The tag to control this process is:

CleanDim="false" or CleanDim="true"

Options to Control Cube Loading
There are four basic tags used to support loading of data within a cube. The first, XMLIDref, identifies the cube to be processed. It is not possible to maintain a single measure within a cube, as processing is done at the cube level (although if you are using a view as the source in your cube mapping, you could modify the view to only provide values for the specific measure and return null for all other measures). The XMLIDref tag appears as follows, where SALES is the name of the cube to be processed.


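Again as an illustration (SALES.CUBE is the identifier implied by the text above), the entry would look something like:

```xml
<BuildList XMLIDref="SALES.CUBE" />
```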
When you load data into a cube you have the option to remove all existing values from the cube. I have to admit the documentation is not very helpful regarding this tag:

Indicates whether the analytic workspace deletes existing measures before it loads new measures. This method returns true if the analytic workspace deletes existing measures before it loads new measures, or false otherwise.

I can only assume this sets the variable to NA prior to refreshing the data within the cube. Obviously, the process of removing all existing data will take some time, and this needs to be considered when trying to determine how long the maintenance job will take to run.


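The tag being described appears to be CleanMeasures - I am inferring the name from AWM-generated build scripts, so treat it as an assumption:

```xml
CleanMeasures="false" or CleanMeasures="true"
```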
The last two tags, CleanAttrs and CleanDim, work in exactly the same way as outlined in the dimension maintenance section. During a cube build it is possible to maintain the base dimensions within the same job; in this case the dimensions are processed first, to ensure all the required members are present before the cube is processed.


My personal preference is to do the dimension maintenance first and then run a separate job to load and aggregate any cubes. This helps reduce the amount of temp space required during a build as you can at least commit all the dimension changes to disk before loading data into the cube.

Options to Control Aggregation
The tag TrackStatus, which can have the value true or false, determines whether a load should aggregate only incoming values. If it is set to false, the whole cube is aggregated once the data load has completed. Obviously the aim should be to set this to true, to limit the amount of work performed during the aggregation process. During data loads into a partitioned cube you can use this feature to limit processing to just two partitions - the data-driven target partition (for example, a specific month) and the default partition (quarters and years, if partitioned by month) - which makes data loading a very fast process.

TrackStatus="true" or TrackStatus="false"

Options to Control Execution
The tag Id is just used to mark the job. It must be unique within a group of jobs submitted at the same time, but apart from that it has no real significance, and there is no link through to the data recorded within XML_LOAD_LOG. The tag should look something like this:


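For example (the value itself is arbitrary, as long as it is unique within the batch):

```xml
Id="Action1"
```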
The tag AWName identifies the analytic workspace that will be attached as part of this job. For example, below, the AW called SH_AW in the schema SH_OLAP will be used:


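The schema-qualified form matches the example just described:

```xml
AWName="SH_OLAP.SH_AW"
```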
The tag BuildType is very important, as it controls how the build is managed. It has two modes:

  • Execute – this forces the calling process, your SQL program, to wait until the procedure has completed before regaining control.
  • Background - this allows the calling process, your SQL program, to continue processing while the job continues in the background. If you have multiple processors (see section on MaxJobQueues) this is the method you must use to allow the controlling job to spawn multiple jobs to get the work done.

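In the build script this appears as an attribute of the BuildDatabase tag, for example:

```xml
BuildType="EXECUTE" or BuildType="BACKGROUND"
```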
Interestingly, the main problem with using this tag within a script is that no errors are returned even if the build fails. The only message that gets returned is "Completed Successfully". Therefore, to determine the status of a job you need to monitor XML_LOAD_LOG and check for any reported errors.
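A simple way to monitor progress is to query the log table from SQL*Plus. I am assuming here that XML_LOAD_LOG is visible in the schema running the build; adjust the owner if yours lives elsewhere:

```sql
SQL>SELECT * FROM xml_load_log;
```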


If you have a server with multiple CPUs, you can run parallel execution jobs by setting the MaxJobQueues tag.


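For example, to allow two parallel job queues (the value 2 is illustrative):

```xml
MaxJobQueues="2"
```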
This controls the number of parallel jobs used to execute the aggregation step. The result is entries like the following in XML_LOAD_LOG:

08:58:50 Running Jobs: AWXML$_483_1. Waiting for Tasks to Finish...
08:58:50 Started 1 Finished 2 out of 10 Tasks.
08:58:50 Running Jobs: AWXML$_483_1.

Don't set this figure too high! The danger is that if you set it too close to the total number of CPUs, you swamp the machine and the performance of the load and aggregation drops off. The best starting point is to aim to use half the number of CPUs.

And last but not least…
If you look at a maintenance script generated by AWM, you will notice a number after the word xml_clob. In the example below the number is 190:

dbms_lob.writeappend(xml_clob, 190,

This refers to the number of characters, including spaces, between the next single quotation mark and the last quotation mark at the end of the line:

(xml_clob, 190,' ………………………………………………………………… ');

If you modify any of the entries for the tags outlined above, you need to make sure the number after xml_clob correctly identifies the number of characters between the quotation marks. The easiest way to do this is to use Word (Tools->Word Count). If the number is wrong, the job will fail but you will not get an error message returned via SQL*Plus - only the statement "PL/SQL Procedure Completed Successfully". If you are really lucky and the issue is with the BuildDatabase tag, the job might actually run to completion, as the defaults for the missing tags might kick in and allow the job to run. Otherwise, all you are likely to see in XML_LOAD_LOG is a statement such as:

14:13:48 Attached AW SH_OLAP.SH_AW in RW Mode

And nothing else. This means the job managed to find the correct AW and attach it but the rest of your job failed. So check the numbers!

And make sure you detach the AW otherwise all other jobs will be blocked by that session which will have the AW attached in read-write mode.
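Detaching can be done from SQL*Plus with an OLAP DML command wrapped in DBMS_AW.EXECUTE (the AW name here is the SH_OLAP.SH_AW example used above):

```sql
SQL>execute dbms_aw.execute ('aw detach SH_OLAP.SH_AW');
```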

Tuesday, July 03, 2007

What no GUI? How to Build Analytic Workspaces using scripts


Recently on the OLAP OTN forum there have been a number of questions regarding the use of scripting to control the design, build, and maintenance procedures for analytic workspaces. There are many reasons for using scripting rather than Analytic Workspace Manager to build and maintain an AW. Many DBAs prefer scripts because they can be scheduled and/or quickly and easily moved from one platform to another without having to install any GUI components. The most likely scenario is that, for the production environment, a direct connection from a GUI client tool is not allowed. Obviously, if you are using Warehouse Builder this is not a problem, as OWB will automatically generate deployment and maintenance scripts that do not require execution via the GUI. However, when you are just using AWM, things can get a little complicated.

As there is so much information to cover I am going to break this into a series of postings. The purpose of this one is to provide a brief summary of some of the ways to create and define objects within an AW from the command line.

Part 1 : How to create a new AW

It is possible to create an AW from the command line by using the AW_ATTACH procedure. The AW_ATTACH procedure attaches an existing analytic workspace to your SQL session so that you can access its contents. The analytic workspace remains attached until you explicitly detach it, or you end your session. AW_ATTACH can also be used to create a new analytic workspace, but the AW_CREATE procedure is provided specifically for that purpose. The signature is along the following lines (parameter names as I recall them from the 10g documentation - check the PL/SQL Packages reference for your release):

DBMS_AW.AW_ATTACH (
   awname      IN VARCHAR2,
   forwrite    IN BOOLEAN DEFAULT FALSE,
   createaw    IN BOOLEAN DEFAULT FALSE,
   attargs     IN VARCHAR2 DEFAULT NULL,
   tablespace  IN VARCHAR2 DEFAULT NULL);

Notice that, using the scripting approach, it is possible to set the default tablespace for the AW - a feature that is missing from the GUI. For example, the next SQL call creates the AW GLOBAL_PROGRAMS as the last user-owned analytic workspace in tablespace USERS:

SQL>EXECUTE DBMS_AW.AW_ATTACH('global_programs', true, true, 'last', 'USERS');

Alternatively you can go a step further and use the AW_CREATE procedure, which provides more control in some areas and less in others. The AW_CREATE procedure creates a new, empty analytic workspace. Its signature is simpler (again, verify against your release):

DBMS_AW.AW_CREATE (
   awname      IN VARCHAR2,
   tablespace  IN VARCHAR2,
   partnum     IN NUMBER);

For example, the next command creates an analytic workspace named GLOBAL_AW. The resulting table AW$GLOBAL_AW will have two partitions and will be stored in the GLOBAL tablespace.

SQL>EXECUTE DBMS_AW.AW_CREATE('global_aw', 'global', 2);

With the AW_CREATE command it is possible to control the number of partitions, which is a very useful feature, but you do lose some control over where the AW is created in the attach list. This is only important if you have multiple AWs attached during the creation process. So why do we have two different procedures to create AWs? I have no idea, but I am sure it seemed like a good idea at the time.

Now we have an empty AW, the next step is to start adding objects, possibly some data as well, to the AW. There are two options:
  • Import objects and data from an existing EIF file
  • Import objects based on existing XML documents
Part 2a - Importing an existing EIF file

If you already have an EIF file from an existing AW, the next step is to import that file into your new AW. Ideally, the EIF file will contain all the required standard form metadata. If this is not the case, and the EIF file is from an older version of Oracle OLAP or from Oracle Express, then you will need to take a different approach - another blog entry will deal with the upgrade process.

To run the import process you need to place the EIF file in a directory on the file system, then create a directory object within the database that points to that filestore. We can use OLAP DML commands wrapped within the DBMS_AW.EXECUTE procedure to import the EIF file from the command line. Personally, I prefer to make two passes through the EIF file: first to create the objects, and then to load the data. This goes back to the good old days of Express, where it ensured efficient storage within the database files. I am not sure if this still applies to 10g OLAP schemas but, as old habits die hard, here is the two-pass process:

SQL>execute dbms_aw.execute ('aw attach name_of_analytic_workspace');

SQL>execute dbms_aw.execute ('cda scripts');

SQL>execute dbms_aw.execute ('import all from eif file ''global.eif'' dfns');

SQL>execute dbms_aw.execute ('update');

SQL>execute dbms_aw.execute ('commit');

SQL>execute dbms_aw.execute ('import all from eif file ''global.eif'' data');

SQL>execute dbms_aw.execute ('update');

SQL>execute dbms_aw.execute ('commit');

The assumption here is that the EIF file being used contains objects defined in standard form. If this is not the case, then you will need to run a conversion process to generate standard form metadata.

A word of warning is needed here: the import process above is going to load data into the analytic workspace with no commit point until the very end. Therefore, you need to make sure you have plenty of storage space within your TEMP tablespace, otherwise it is likely the import will fail with one of a number of error messages relating to memory and/or temp space issues. It is possible to control the update process by using the UPDATE keyword. For example:

SQL>execute dbms_aw.execute ('import all from eif file ''global.eif'' data UPDATE');

This will force an update after each object has been loaded. This is fine until you hit a very large cube, when you may need slightly more detailed control over the update process. The EIFUPDBYTES option lets you define the volume of data to read between updates. The following commands will update after each block of 500,000 bytes has been loaded.

SQL>execute dbms_aw.execute ('EIFUPDBYTES = 500000');

SQL>execute dbms_aw.execute ('import all from eif file ''global.eif'' data UPDATE');

Obviously, by executing continual updates during the import the whole process will take much longer to complete than a single read with a final commit point at the end. But in the end “you pays your money and takes your choice” as they say.

One major problem with EIF imports is that most people create a single EIF file, which creates problems if you need to recover from a failed import process. Why? Because you have to start right at the beginning and import the whole thing all over again. Or so you would think! The EIF file is in fact quite smart in terms of its contents: it is possible to get a list of the file's actual contents, so in reality you can recover from a failed import by checking what was imported and is now resident in the AW, and what is left to import from the EIF file. It requires a bit of OLAP DML skill... something for another blog posting, perhaps?

Part 2b - Importing an existing XML definition

Each of the create procedures described above generates a completely blank AW; there is no standard form metadata contained within the resulting AW. Before you can import an XML template into the AW, standard form metadata needs to be added. The only way to do this seemed to be to use AWM (or write your own Java procedure to call the AWXML API). After a bit of digging, however, I discovered an undocumented feature called create_db_stdform. It is referenced in older versions of the OLAP documentation set, and the syntax is as follows:
  • call create_db_stdform('my_aw_name','rw')

There is more information on Mark Rittman’s excellent blog website, where one of his posts covers migrating from previous versions of Express to 9i OLAP. The full article is on Mark’s website at:


To execute this command we can use the normal process as follows:

SQL>execute dbms_aw.execute ('call create_db_stdform(''global_aw'', ''rw'')');

Now the AW will appear in AWM under the Model View, and it is possible to start creating dimensions and cubes. However, the purpose of this blog is to explain how to avoid using the GUI!
The next stage is to use existing XML template documents to define the objects within the AW. As far as I can tell, there are no command line tools to create an AW XML definition for an existing object in an existing AW. The only way to do this is to use Analytic Workspace Manager and the right-click menu options, or to use the Generate feature in Warehouse Builder (unless someone knows how to do this from the command line?).

So, assuming you have all the required XML files to create the dimensions and cubes for your new AW, the process for importing these files is very simple. There are two procedures you can use:

    • EXECUTEFILE - creates all or part of a standard form analytic workspace from an XML document stored in a text file.
    • EXECUTE - creates all or part of a standard form analytic workspace from an XML document stored in a CLOB.
The EXECUTEFILE function is the easier of the two to use. Assuming you have generated an XML definition of your AW, you will also need to define a directory object within the database that points to the location of the XML file(s). In the example below, the directory SCRIPTS was created, the definition for the AW is contained within the file GLOBAL.XML, and the whole command is wrapped in DBMS_OUTPUT.PUT_LINE so that the "Success" message returned by EXECUTEFILE is displayed:

SQL>execute dbms_output.put_line(dbms_aw_xml.executefile('SCRIPTS', 'GLOBAL.XML'));
Success

The EXECUTE approach is slightly more complicated but achieves exactly the same result. So why do we have two ways of achieving the same thing? Who knows - as I said before, I am sure there is a very good reason.

--Use DBMS_LOB package to create a clob

DECLARE

  clb CLOB;

  infile BFILE;

  dname varchar2(500);

BEGIN

  -- Create a temporary clob
  DBMS_LOB.createtemporary(clb, TRUE);

  -- Create a BFILE using the BFILENAME function
  -- Use file GLOBAL.XML in the SCRIPTS directory object.
  infile := BFILENAME('SCRIPTS', 'GLOBAL.XML');

  -- Open the BFILE
  DBMS_LOB.fileopen(infile, dbms_lob.file_readonly);

  -- Load temporary clob from the BFILE
  DBMS_LOB.loadfromfile(clb, infile, DBMS_LOB.getlength(infile));

  -- Close the BFILE
  DBMS_LOB.fileclose(infile);

  -- Create the GLOBAL analytic workspace
  dname := dbms_aw_xml.execute(clb);
  dbms_output.put_line(dname);

  -- Free the Temporary Clob
  DBMS_LOB.freetemporary(clb);

END;
/
And there it is: a new standard form analytic workspace, created and populated with base objects (dimensions and cubes), without using a GUI. In the next blog posting I will explain how to maintain cubes and dimensions without using the GUI.