Sunday 29 November 2015

Essbase 12c for BI – Part 2

I was hoping to write up this second part of the Essbase 12c for BI series much sooner, but I just couldn’t find the time.

In the last post I covered the main noticeable changes to Essbase in 12c, which is currently only available bundled with OBIEE. In this post I want to continue the theme and cover how clustering Essbase differs in 12c now that OPMN is no more.

I know this topic will not appeal to everyone, but it is worth looking into how Essbase is clustered in OBIEE 12c, as it is a glimpse into how it may function in EPM 12c. Nobody knows for sure when 12c will be available for EPM, and Oracle seem to be focused on only one thing at the moment, so it may be a long wait.

In 11.x Essbase was clustered in an active/passive configuration and managed by OPMN, which was mainly aimed at *nix type deployments; on Windows you had the additional option of using Failover Clusters, though annoyingly OPMN was still in the mix to manage the agent process.

I think OPMN divided opinion, and personally I have never found it great for managing Essbase. If you were not clustering Essbase, or were clustering on Windows, it didn’t really make much sense for it to be there, and it ended up causing more confusion than anything.

In 12c OPMN has been dropped and everything is now managed by the WebLogic framework. As I covered in the last part, Essbase is deployed as a WebLogic managed service and the Essbase Java agent replaces the C based one, though there are no changes to the Essbase server.

In 12c the Essbase agent still follows an active-passive clustering topology, implemented with a WebLogic singleton service, so if you were expecting active-active then you will be disappointed.

A singleton service is a service running on a managed server that is available on only one member of a cluster at a time. WebLogic Server allows you to automatically monitor and migrate singleton services from one server to another.

This is what the documentation has to say about clustering and failover for Essbase 12c.

High availability support for Essbase is a function of the WebLogic Server interface on which the Essbase Java Agent runs. The agent and the Essbase Server continually register “heartbeats” with the database, to confirm that it is still actively running. If a server process does not return a regular heartbeat, the agent assumes there is a problem, and terminates the server process.

Essbase failover uses WebLogic Server to solve the problem of split-brain. A split-brain situation can develop when both instances in an active-passive clustered environment think they are “active” and are unaware of each other, which violates the basic premise of active-passive clustering. Situations where split-brain can occur include network outages or partitions.

Lease management built into WebLogic Server ensures exclusivity of instance control. There are two types of cluster migration techniques you can select:

  • Consensus leasing - the default type for the WebLogic Server cluster on which Essbase is deployed.
  • Database leasing - requires a highly available database to maintain the leasing information.


It is worth picking up on a few of the points made in the statement above. As the Essbase agent operates in an active/passive configuration, it uses a singleton service so that it is only active on one member of a WebLogic cluster at a time, and this is managed using consensus leasing.

To try and explain WebLogic leasing, here are some more excerpts from the documentation:

Leasing is the process WebLogic Server uses to manage services that are required to run on only one member of a cluster at a time. Leasing ensures exclusive ownership of a cluster-wide entity. Within a cluster, there is a single owner of a lease. Additionally, leases can failover in case of server or cluster failure. This helps to avoid having a single point of failure.

Consensus leasing requires that you use Node Manager to control servers within the cluster. Node Manager should be running on every machine hosting Managed Servers within the cluster.


In Consensus leasing, there is no highly available database required. The cluster leader maintains the leases in-memory. All the servers renew their leases by contacting the cluster leader, however, the leasing table is replicated to other nodes of the cluster to provide failover.

The cluster leader is elected by all the running servers in the cluster. A server becomes a cluster leader only when it has received acceptance from the majority of the servers.

 
Essbase also has new functionality in 12c to register heartbeats in a set of database tables to confirm that the agent and server are actively running.

After starting the Essbase agent you will see the following entry in the log to confirm the heartbeat has started.

[SRC_CLASS: oracle.epm.jagent.net.nio.mina.server.NetNioCapiServerMain] [SRC_METHOD: startAgentHeartBeatDaemonThread] agent.heartbeat.thread.started

There is a new essbase.cfg setting that controls the interval of the heartbeat.

HEARTBEATINTERVAL

Sets the interval at which the Essbase Java Agent and the Essbase Server register a heartbeat with the database to confirm that they are actively running.

The default interval is 10 seconds.

Syntax

HEARTBEATINTERVAL  n

The interval is set to 20 seconds in the essbase.cfg when Essbase is deployed in OBIEE 12c.
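For the OBIEE 12c deployment that means the entry in the essbase.cfg is simply:

HEARTBEATINTERVAL 20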

There are two tables that are part of the heartbeat, one for the agent and one for the server.


The agent runtime table holds information on the current active host and includes a last modified date column, which is updated every n seconds depending on the heartbeat interval setting.


The server runtime table holds information on all the active applications, and its last modified date is updated in line with the heartbeat interval.


There are two views which additionally calculate the number of seconds since the last heartbeat.
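If you wanted to do that calculation yourself, here is a minimal Python sketch; the table name, column names and connection details are my own guesses (the real views already do this for you), so treat them as hypothetical:

import cx_Oracle

# Connect to the repository schema holding the Essbase runtime tables
# (schema name and connect details are illustrative only)
conn = cx_Oracle.connect("DEV_BIPLATFORM", "password", "dbhost:1521/orcl")
cur = conn.cursor()

# Table and column names are guesses based on the other ESSBASE_ tables
cur.execute("""
    SELECT host_name,
           ROUND((SYSDATE - last_modified_date) * 86400) AS secs_since_heartbeat
      FROM essbase_agent_runtime
""")
for row in cur:
    print(row)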





If an Essbase application crashes and the esssvr process is lost then you will see the following entries in the log:

[SRC_CLASS: oracle.epm.jagent.process.heartbeat.ServerHeartbeatHarvestor] [SRC_METHOD: harvest] server.heartbeat.missed
[SRC_CLASS: oracle.epm.jagent.process.heartbeat.ServerHeartbeatHarvestor] [SRC_METHOD: harvest] removed.pathalogical.server

The server runtime table will then be updated to remove the server entry.



If the agent crashes while there are active applications, then in the Essbase application logs there is an entry like this:

Essbase agent with ID [PRIMODIAL_AGENT_ID], is not likely to be running because there is no heart beat since [56] seconds. This server instance will be gracefully shutdown.

Each active Essbase server process will be stopped and records removed from the server runtime table.

Right, now on to clustering Essbase. As Essbase is embedded in the BI managed server, scaling out means adding a new machine to the domain, ending up with the following architecture.



Usually you would then front the managed servers with an HTTP server like OHS or a load balancer.

From the diagram you can see that there is a singleton data directory, which by default is DOMAIN_HOME/bidata.

In terms of Essbase, this is the equivalent of the ARBORPATH location.



Now that Essbase is going to be clustered across two machines, the directory will need to be accessible from both nodes; for Essbase you want the shared file system to be running on the fastest possible storage.

The singleton data directory (SDD) is defined in:


DOMAIN_HOME/config/fmwconfig/bienv/core/bi_environment.xml



The directory should be updated to point to the shared file system.


The contents of the bidata directory are copied to the shared directory.



Next, install OBIEE 12c on the second node using the same ORACLE_HOME path.

On the master node there is a script available which clones the machine definition in the BI domain to a new machine and then packs the domain.


DOMAIN_HOME/bitools/bin/clone_bi_machine.sh|cmd [-m ]

This will create a template archive (.jar) that contains a snapshot of the BI domain.

Copy the template archive to the second node and run the unpack command to create the DOMAIN_HOME


unpack.sh|cmd -domain=DOMAIN_HOME -template=<template_archive.jar> -nodemanager_type=PerDomainNodeManager

For example (paths and names here are purely illustrative):
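ORACLE_HOME/oracle_common/common/bin/unpack.sh -domain=/u01/config/domains/bi -template=/tmp/bi_machine_template.jar -nodemanager_type=PerDomainNodeManager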

Re-synchronize the data source on the new machine by running the script:

DOMAIN_HOME/bitools/bin/sync_midtier_db.cmd

On the second node start the node manager, and on the master node start the BI components. All done.

In the WebLogic admin console you should see multiple machines, each with a BI managed server running on it, which are part of the BI cluster.



From an Essbase perspective the web application should be running on both nodes in the cluster, but the agent will only be active on one of them.


So how do you know which is the active node? Well, there are a number of ways.

You could check the jagent.log for an entry like:


[SRC_CLASS: oracle.epm.jagent.net.nio.mina.server.NetNioCapiServerMain] [SRC_METHOD: startMain] Oracle Essbase Jagent 12.2.1.0.000.150 started on 9799 at Sun Nov 29 14:51:33 GMT 2015

You could check the Essbase runtime database table or view.



There is also the discovery URL, which is used when connecting to a clustered Essbase instance by methods such as Maxl.


If you have been involved in Essbase clustering in 11.x then you will have used a similar method going via APS.

The discovery URL will be available on each active managed server node.



Usually you would have an HTTP server or load balancer in front of the managed servers, so only one URL is required.

The discovery URL works by querying the agent runtime table to find the active node, so when using Maxl you connect to the URL and are then directed to the active agent.

To mimic what Maxl is doing, all you need to do is post a username and password to the agent servlet.

For example, using a REST browser plugin you can post the following.



It also requires a header GET_ACTIVE_ESSBASE_NODE with a value of ON.
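To give a feel for it, here is a minimal Python sketch of the same request; the discovery URL is the one from my environment, and the form field names and credentials are my assumptions, so adjust to suit:

import requests

# Discovery URL - host and port are from my environment, adjust to suit
url = "http://fusion12:9502/essbase/agent"

# This header tells the servlet to return the active Essbase node
headers = {"GET_ACTIVE_ESSBASE_NODE": "ON"}

# Post a username and password like Maxl does; the form field names and
# credentials are assumptions rather than confirmed parameter names
payload = {"username": "weblogic", "password": "password"}

response = requests.post(url, data=payload, headers=headers)

# Print the response, which should name the active agent
print(response.status_code)
print(response.text)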


The response will include the server and port of the active Essbase agent.


If you look at the jagent log you will see the request to return the active node:

[SRC_CLASS: oracle.epm.jagent.servlet.EssbaseDiscoverer] [SRC_METHOD: getActiveJAgentNodeFromDB] essbase.active.node.found

There is another method to find the active node through the WebLogic admin console, but I will get on to that shortly as it requires additional configuration.

Right, so let us test failover of the Essbase agent. Currently the agent is running on FUSION12 and I want to fail over to OBINODE2.

The BI managed server is shut down on FUSION12.



Check the jagent.log:


[SRC_CLASS: oracle.epm.jagent.net.nio.mina.server.NetNioCapiServerMain] [SRC_METHOD: startMain] Oracle Essbase Jagent 12.2.1.0.000.150 started on 9799 at Sun Nov 29 16:08:24 GMT 2015

The agent runtime database view:


The discovery URL:


So each method confirms the Essbase agent has failed over.

This is fine, but what if you want both managed servers to be active and only move the active Essbase agent across servers?

This can be done within the WebLogic admin console using the singleton service migrate functionality; the problem is that this is not configured by default, so I had to add the configuration in the console.



Once configured it is possible to migrate the agent across servers.


Simply select the server to migrate to.


The agent should now be active on bi_server1 – FUSION12.


To confirm this, once again using the various methods:

jagent.log:

[SRC_CLASS: oracle.epm.jagent.net.nio.mina.server.NetNioCapiServerMain] [SRC_METHOD: startMain] Oracle Essbase Jagent 12.2.1.0.000.150 started on 9799 at Sun Nov 29 16:33:55 GMT 2015

Agent runtime database view:



Discovery URL:



I was also looking for an automated way of migrating the singleton service, which I am sure is possible as I found the mbean responsible for it, but it looks more complex than I first thought it would be.


I am sure it is possible by some means of scripting, and one day I might actually look into it in more detail, but for now I have not got the time or energy.
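For what it is worth, here is the kind of untested WLST sketch I had in mind; the singleton service name, the attribute name and the connection details are all my assumptions from poking around the console, so treat it as a starting point rather than a working solution:

# Untested sketch - the MBean path, service name and attribute are assumptions
connect('weblogic', 'password', 't3://fusion12:9500')

edit()
startEdit()

# Assumes the singleton service was registered as 'ESSBASE_AGENT' when the
# configuration was added to the cluster
cd('/SingletonServices/ESSBASE_AGENT')

# Changing the user preferred server should trigger a migration to it
# (attribute guessed from the console's 'User Preferred Server' field)
cmo.setUserPreferredServer(getMBean('/Servers/bi_server1'))

save()
activate()
disconnect()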

So that is my Sunday gone. I hope you found this post useful; why oh why do I bother :)

Sunday 1 November 2015

Essbase 12c for BI: A glimpse into the future of Essbase for EPM

OBIEE 12c was released recently and, like OBIEE 11g, it comes bundled with a suite of Essbase and EPM products. The noticeable highlight from an EPM perspective is that it is the first sight of Essbase 12c.

It is worth stressing that this is Essbase for BI and not EPM; they are currently different code lines, and by the time EPM 12c is released, which could be 2017, Essbase 12c may have gone through many more changes.

Though what this release for BI does provide is a glimpse into a couple of big changes to the architecture behind Essbase, the two standout ones are:
  • The Essbase C agent is now replaced by a java agent.
  • The Essbase security file is no more and finally moves into the RDBMS.
If you have been around Essbase for a long time then you will agree these are pretty fundamental changes.

I am going to try and cover these in a little more detail, but as Essbase is not standalone and is bundled with OBIEE, there are a few restrictions.

When you carry out a default install of OBIEE 12c the EPM/Essbase components are automatically installed.


At the point of configuration there is an option to choose whether to include the Essbase components.


The EPM components are all deployed into the same WebLogic managed server as OBI and these include:
  • Essbase Agent.
  • Cube Deployment Services.
  • Workspace (EPM application with limited functionality).
  • Calculation Manager (EPM application also known as Allocations Manager).
  • Hyperion Provider Services (APS).
This means the Essbase C Agent is replaced with the Essbase Java Agent which runs as a web application.

It doesn’t look like it is currently possible to deploy the Essbase web application to its own managed server, which is a bit of a pain, and I am also not a fan of trying to deploy too much into one managed server. This is similar in the EPM world to deploying all the web applications to a single managed server, which I usually steer away from for a number of reasons.

There is an interesting statement in the documentation: “Essbase Java Agent offers improved concurrency and networking capabilities over the classic C Agent”.

Does this also mean the Java agent is going to offer more stability and be less likely to freeze? Until it is possible to deploy Essbase to its own managed server away from the other BI components, I feel there are too many factors involved to be able to prove this.

Once OBI has been configured and you take a look at the deployments within the WebLogic admin console you will see the Essbase web application.


If you are wondering, CDS stands for Cube Deployment Services, which forms the Essbase Business Intelligence Wizard, a web based tool for building Essbase cubes.


It gives the feel of a very simplistic version of Essbase Studio and provides functionality to deploy ASO cubes from BI models within an RPD.


OBIEE 11g included EAS and Studio, but these have now been dropped and replaced with CDS. If you are from the EPM world then I doubt you will be too impressed with this functionality, but I suppose if you are looking to build straight-up aggregation ASO cubes it simplifies the process.

I am not going to go into any more detail on CDS; maybe that is for another day, or I am sure it will be covered by others.

Anyway back to Essbase, once the BI managed server has been started the Essbase Java Agent should be up and running.

One of the many reasons why the Essbase C agent has been replaced by the Java agent is that in 12c there is no longer OPMN, and the WebLogic framework pretty much runs the show.

I have never really believed in Essbase and OPMN unless you are clustering on a *nix system. OK, you can get OPMN to restart Essbase if it crashes, but the drawbacks outweigh any benefit; it would have been nice to be given the option of whether to deploy it.

It is all change in 12c and we will have to see if the WebLogic framework handles it any better.

No longer will you see an Essbase process running; you will only see the java process, which is running all the deployments in the WebLogic managed server.


If you start an Essbase application you will notice that not everything has moved to Java, and C is still being used for the Essbase Server process. I suspect the long term goal would be for the server to also move to Java.

The essbase.log is engraved into the memory of anybody that uses Essbase. In version 11 there was also the ODL version, which made it confusing as there were then two Essbase logs. In 12c the essbase.log has gone and logging has moved into the logs directory of the managed server.

jagent.log is the replacement.


The contents of the log are very different to what you have been used to with the old Essbase log.

To verify the agent is up and running you should be looking for:

Oracle Essbase Jagent 12.2.1.0.000.150 started on 9799 at Fri Oct 31 00:45:47 GMT 2015

OBI 12c uses a range of ports starting at 9500 by default, and the Essbase agent is assigned 9799; having said that, the Essbase server port range still starts at 32768.

What is a bit concerning is that the log seems to be full of a repetitive error:

exception.during.capi.request[[java.lang.NoClassDefFoundError: Could not initialize class oracle.epm.jagent.logging.LoggerHelper

This type of error usually means that a class is not contained in the java classpath, but the class in the error message is definitely in the path. It is a bit annoying, as there could be important messages that are not being logged; I am going to put it down to it being such a new release at the moment.

You can also check the status of Essbase in the WebLogic admin console or using WLST.


As Essbase is running as a web application, a page should be returned for the agent over http(s).


Though technically speaking it is possible that the web application could be up and running while there is an issue with the agent which is preventing it from starting.

The application logs are a couple of directories below the jagent log


As the Essbase server has not changed, you still have the old style log and the ODL log; so much for progression :)

Some of the Essbase components are still installed under a products folder similar to the current EPM structure.


The Essbase ARBORPATH, and therefore the location of the Essbase applications, is by default /bidata/components/essbase. This is defined by an XML file which I will cover in more detail when I go through clustering.


As the Essbase Server (ESSSVR) has not changed there are no shocks around how the applications operate and store metadata/data.

The Essbase configuration file (essbase.cfg) has not gone away, but I suppose you were thinking that it was going to be in the above bin directory; don’t be silly, you are used to that, so it is time for a change :)


All the BI configuration files are situated under the fmwconfig directory, and this is where the essbase.cfg sits. Remember this is Essbase for BI, and it doesn’t mean it will apply to EPM, though I suspect it might.

The cfg file is relatively unchanged, with the noticeable addition of the JAGENT_ID.


PRIMODIAL_AGENT_ID just rolls off the tongue :)

There is a long list of cfg settings that are no longer relevant in 12c and these can be found in the documentation.

In Essbase for BI, security is not controlled by Shared Services and instead falls under Enterprise Manager using Oracle Platform Security Services (OPSS), which is why OPSS is set as the authentication module. I am not going to go into it as it is hideous, and you can read all about it here.

So how about starting and stopping Essbase?

As Essbase is deployed in the BI WebLogic managed server, once the managed server is started Essbase should also be started; the managed server can also be controlled by the WebLogic node manager.

Alternatively, if you only wanted to start/stop Essbase, this could be done from the WebLogic console.


This can also be done with WLST using startApplication('ESSBASE') and stopApplication('ESSBASE').
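A minimal WLST sketch, with illustrative credentials and the default admin port:

# Connect to the admin server (credentials and URL are illustrative)
connect('weblogic', 'password', 't3://fusion12:9500')

# Stop and then start the Essbase web application by its deployment name
stopApplication('ESSBASE')
startApplication('ESSBASE')

disconnect()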

As I mentioned earlier, EAS has been removed from OBIEE 12c, though technically you can still use EAS in an EPM environment to connect to Essbase 12c. No doubt doing this is not supported, and you will encounter the following error if you try to implement any changes:

Error: 1051734 Operation not supported, authentication is managed by Fusion Middleware in this version of Essbase

Luckily there is still Maxl, and even ESSCMD lives on in 12c; both can be accessed from:




Connecting to Essbase can still be achieved in the same way as previously in Maxl.


The documentation recommends going through the discovery URL, which is similar to the method currently available in EPM via APS.


This method of connecting becomes more relevant when using Essbase clustering as it will determine the active agent node and then connect to it.

Not all of the same functionality is available in 12c due to the changes, and the differences are covered in the documentation.

It is definitely not possible to shut down Essbase using Maxl.


Moving on to the next big change in 12c, and one I am sure many will welcome: the security file has finally been moved into the RDBMS.

The security file has always been troublesome, and I have lost count of the number of times I have seen it corrupted, forcing a restore from backup.

From Essbase 11.1.2, some of the elements of the security file, like users and groups, moved into the Shared Services database, but unfortunately not all of them.

The documentation, I feel, is being a bit biased towards the Oracle database in its description:

“The Essbase RDBMS schema is the Oracle relational database that stores Essbase application and database metadata”

As the agent and server now use different technologies, so do their methods of connecting to the RDBMS schema:

“The Agent connects to the Essbase RDBMS schema using EclipseLink, an open source mapping and persistence framework. The Essbase Server connects to the Essbase RDBMS schema using ODBC DataDirect drivers.”

The following set of tables forms part of the Essbase RDBMS schema:


Many of the table names are easy to relate to, and it doesn’t take long to understand how they are being populated; a few of the tables are still a bit of a mystery to me at the moment.

Let us take a couple of examples, starting with creating a new substitution variable.


You will not be surprised to learn that the information is then stored in the table ESSBASE_SUBSTITUTION_VARIABLE; depending on the scope of the variable, you may also need to bring in the ESSBASE_APPLICATION and ESSBASE_DATABASE tables if you want to view meaningful information.
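Something along these lines should pull it all together; the table names are from above but the column and join names are my own guesses, so treat them as hypothetical:

import cx_Oracle

# Connection details are illustrative only
conn = cx_Oracle.connect("DEV_BIPLATFORM", "password", "dbhost:1521/orcl")
cur = conn.cursor()

# Column and join names are guesses based on the table names
cur.execute("""
    SELECT a.application_name, d.database_name, v.variable_name, v.variable_value
      FROM essbase_substitution_variable v
      LEFT JOIN essbase_application a ON a.application_id = v.application_id
      LEFT JOIN essbase_database d ON d.database_id = v.database_id
""")
for row in cur:
    print(row)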


Now for another example: creating a new filter.


Once again it is not difficult to figure out the driving tables; this time they are ESSBASE_FILTER and ESSBASE_ROW.

A simple SQL query can return the filter information from the database.


Moving the contents into the RDBMS certainly opens up more possibilities around querying information than was previously available when it was held in the security file.

I am sure some of you are thinking that it should now be possible to directly populate the tables instead of using Maxl or an API. The problem there is that the data is cached in memory, and if changes are made to the tables they will not be seen as active until a restart or until the cache is somehow refreshed. It makes sense for it to be held in memory, as in theory it should provide faster access and less activity against the database.

I was going to cover the changes to clustering Essbase now that OPMN no longer exists but I think I will leave it for today and will cover it in the next part.