Saturday 31 December 2016

FDMEE and the REST is up to you – Part 1

I have previously written a couple of posts around FDMEE Web Services and at the time of writing them the options for on-premise did not include any RESTful services.

The first part covered the Web Services available in FDMEE using the SOAP protocol, which requires the arduous task of configuring Oracle Web Services Manager (OWSM). The amount of effort involved has never made this a viable option, and personally I prefer REST over SOAP for ease of use.

The post also covered an alternative way of executing processes in FDMEE by calling a Java Servlet which behind the scenes is basically the same method the command batch scripts use.

In the second part I went through the REST resources available at the time for Data Management in PBCS.

With the release of FDMEE 11.1.2.4.210, many of the EPM Cloud features have finally been pushed down to on-premise. The readme for the FDMEE 210 PSU does not include any reference to the REST API, and the full documentation is yet to be released.

The good news is that the REST API has made its way down to on-premise and since I wrote the original piece on the cloud there have been some additional resources added to the API.

I feel this is a good opportunity to go through what is available in terms of the REST API for both on-premise and cloud.

In this first part I will show how simple it is to execute data load rules and batches using the REST API. I did cover this previously for the cloud, and it is pretty much the same principle for on-premise, but I thought I would go over it again to clear up any confusion.

In the examples I will use a REST client (Boomerang for Chrome) and PowerShell for scripting. I have said this before, but I have only picked them because I find them easy to use and good for demonstration purposes; the beauty is that there are many different clients and scripting languages that can take advantage of REST, so just use the one you are most comfortable with.

The URL structure for accessing the REST resources is:

http(s)://<webserver>:<port>/aif/rest/<api_version>/<path>

The API version is currently V1 for both cloud and on-premise.

The path identifies the resource, and for running rules and batches the path is jobs.

To test the resources are accessible using a REST client a GET request can be made to:

http(s)://<webserver>:<port>/aif/rest/V1


Before sending, the header of the request will require authentication information containing the username and password of a provisioned FDMEE user.


The REST client automatically encodes the credentials into base64 and adds them as a basic authorization header.


After sending the request, a valid response should contain the REST API version information in JSON format.
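
To give an idea of how this could be scripted, here is a minimal PowerShell sketch of the same GET request; the server, port and credentials are made up for the example.

# Minimal sketch - check the FDMEE REST API is accessible (hypothetical host and credentials)
$restBase = "http://fdmeeserver:9000/aif/rest/V1"
$user = "admin"
$password = "password"

# Base64 encode the credentials and add them as a basic authorization header
$encoded = [System.Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes($user + ":" + $password))
$headers = @{"Authorization" = "Basic " + $encoded}

# A valid response should contain the REST API version information in JSON format
Invoke-RestMethod -Uri $restBase -Method Get -Headers $headers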


So let us go through an example of running an FDMEE data load rule.

I have the following data load rule which has a source file and target planning application.


Another of the new features in the 210 PSU is the ability to load non-numeric data. Once again I have covered this in the cloud, but this is a good time to test out the functionality for on-premise FDMEE.

In the target options of the load rule there is the option to set the load method to “All Data Types” and the date format for date data.


For this example, I have created a Text, Date and Smart List member in the target planning application.


The source file contains the three members and data of the equivalent data types.


There is also a form to verify that the non-numeric data has been loaded correctly.


Now that everything is in place I can look at using the REST resource to execute the load rule.

The resource uses a POST method and has a number of input parameters which are required to be in JSON format in the body of the request.

The parameters are:

jobType, which for data load rules will always be “DATARULE”.

jobName will be the name of the data load rule.

startPeriod will be the first period for the data.

endPeriod will be the last period for the data.

importMode defines how the data is imported into FDMEE. The possible values are: APPEND, REPLACE, RECALCULATE, NONE.

exportMode defines how the data is loaded into the target application, the possible values depend on the target application.

For planning these are STORE_DATA, ADD_DATA, SUBSTRACT_DATA, REPLACE_DATA, NONE

If your target application is HFM, then this is where you hit an issue: as the REST resources have been pushed down from the cloud, and there is no HFM in the cloud, not all export options are available. Well, I think they are not there; I will update this post if I find them or if anything changes.

The available values are REPLACE, MERGE, NONE.

This means that accumulate and replace by security are not available. If you need to use those export modes, then as a workaround you can create a batch in FDMEE and use the REST resource to run the batch.

fileName is optional; if not supplied, the file defined in the data load rule will be used.
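
As an illustration, a request body could look something like this; the rule name, periods and modes below are hypothetical, and the period names depend on your period mappings.

{
  "jobType":"DATARULE",
  "jobName":"DLR_Actuals",
  "startPeriod":"Jan-2017",
  "endPeriod":"Jan-2017",
  "importMode":"REPLACE",
  "exportMode":"STORE_DATA"
}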

For the data load rule I am going to execute, the parameters translate into the following:


The request can be sent and a response will be returned with parameters and values in JSON format which provide information about the job.


The jobID parameter is the equivalent of the process ID in FDMEE.

jobStatus will contain “RUNNING”, “SUCCESS” or “FAILED”.

status will be one of the following values: -1 = in progress; 0 = success; 1 = error; 2 = cancel pending; 3 = cancelled; 4 = invalid parameter

The job status has been returned as running. To keep checking the status, a GET request can be made to the URL held in the href parameter.


The job status has now been returned as “SUCCESS” so the full process is complete.

If the job ID is removed from the URL then all job statuses will be returned.

If you take a look in process details in the FDMEE UI then you can see that the process ID and job ID do match and the status is the same.


Back to my form in planning and the non-numeric data has been loaded correctly, so that is two pieces of new functionality tested in one go.


Converting this into a script is a simple task; with not much effort and a small amount of code, an FDMEE data load rule can be executed.
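
As a rough sketch of the idea, a PowerShell version could look something like this; the URL, credentials, rule name and periods are all hypothetical and would need changing to match your environment.

# Minimal sketch - execute an FDMEE data load rule using the REST API (hypothetical values)
$restURL = "http://fdmeeserver:9000/aif/rest/V1/jobs"
$user = "admin"
$password = "password"
$encoded = [System.Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes($user + ":" + $password))
$headers = @{"Authorization" = "Basic " + $encoded}

# Job parameters converted to JSON for the body of the POST request
$body = @{
  "jobType" = "DATARULE"
  "jobName" = "DLR_Actuals"
  "startPeriod" = "Jan-2017"
  "endPeriod" = "Jan-2017"
  "importMode" = "REPLACE"
  "exportMode" = "STORE_DATA"
} | ConvertTo-Json

# POST the request and output the job information returned in the response
$response = Invoke-RestMethod -Uri $restURL -Method Post -Headers $headers -Body $body -ContentType "application/json"
$response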


To keep checking the status, the URL in the returned href parameter can be stored and a request then made against it.
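
Carrying on from the previous sketch, a simple polling loop along these lines would do it; the job URL below is made up, and $headers is the authorization header built earlier.

# Minimal sketch - poll the job status until it is no longer running
# ($statusURL would be the href value returned when the job was submitted)
$statusURL = "http://fdmeeserver:9000/aif/rest/V1/jobs/150"
do {
    Start-Sleep -Seconds 10
    $job = Invoke-RestMethod -Uri $statusURL -Method Get -Headers $headers
} while ($job.jobStatus -eq "RUNNING")
"Final job status: " + $job.jobStatus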


So now rules can be easily executed using the REST API.

I know I am going to get asked about uploading files to the FDMEE directory, because it is possible using REST in the cloud. Currently this is not possible on-premise, as the upload/download functionality in the cloud uses the migration REST API and this has not made it down to on-premise. I am not sure if it will or not, because of the difference between the directory structure in the cloud compared to on-premise; we will see.

Really, it shouldn’t be too much of a problem with on-premise though, as the FDMEE directories are on a file system, so there are many ways to transfer files using scripting languages.

Moving on to the next REST resource, which is the ability to run batches.

For my example I have the following batch created which just executes the same load rule as the previous example.


The REST resource for running batches uses the same URL as for running data load rules, once again with a POST method.

Only a couple of input parameters are required in the body of the request to the resource.

jobType, which for batches will always be “BATCH”.

jobName will be the name of the batch.

For my example this translates to the following request in the REST client:


Just like with the data rule, a response is returned containing all the job information.


The response will only contain the details of the main batch process and not the IDs of the jobs that are part of the batch. This is not really a problem, as you would only really be interested in the status of the overall batch process.


As the job ID of the batch is returned, and you will already know the order of the jobs, you could easily check the details of each of the jobs in the batch.


In terms of scripting, the code would be pretty much the same as for running a data load rule, the only difference being that fewer parameters are required in the body of the request.
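
As a rough idea, reusing the $headers built in the earlier sketch, the body just changes to the two batch parameters; the batch name below is made up.

# Minimal sketch - execute an FDMEE batch using the REST API (hypothetical batch name)
$restURL = "http://fdmeeserver:9000/aif/rest/V1/jobs"
$body = @{
  "jobType" = "BATCH"
  "jobName" = "DAILY_LOAD_BATCH"
} | ConvertTo-Json

# POST the request and output the batch job information returned
$response = Invoke-RestMethod -Uri $restURL -Method Post -Headers $headers -Body $body -ContentType "application/json"
$response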


Using a script is also a nice and simple way of monitoring the status of FDMEE processes without having to log into the UI.
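
For example, as removing the job ID from the URL returns all job statuses, a one-liner like the following sketch could pull back the status of every process.

# Minimal sketch - return the status of all jobs (no job ID in the URL)
$jobsURL = "http://fdmeeserver:9000/aif/rest/V1/jobs"
Invoke-RestMethod -Uri $jobsURL -Method Get -Headers $headers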


One thing that would be beneficial but is not included in the REST responses is the start and end times of the processes; maybe Oracle will include them at some point in the future.

Right, I am going to leave it there for today and in the next part I will cover importing/exporting mappings and running reports using REST.

Monday 19 December 2016

FDMEE/Data Management - loading data by multiple periods and years

Over the past six months there have been a number of changes to the way you can load data, and the type of data you can load, through Data Management in EPM Cloud. I am only referring to the cloud as these enhancements have not yet made it down to on-premise FDMEE.

Update 23rd December 2016 - All the functionality in this post is now available on-premise in FDMEE PSU 11.1.2.4.210

In the PBCS 16.07 update document there was the following new feature:
“Data Management added a new All Data Types Data Load method which enables you to load numbers, text, Smart lists, and date.”

In the 16.09 release there was:
“The text-based data load feature enables users to load text data, dates, and smart lists to applications that support these data types.”

You tell me what the difference is between 16.07 and 16.09 because they look the same to me :)

In the 16.10 release there was:
“Data Management now supports a multi-column load of numeric data for any dimension. Prior to this update, Data Management supported only the loading of multi-period column data. In addition, a multi-period load using the header record is now available.”

Basically I have covered the above functionality in a previous blog post, which you can read all about here.

The previous post was aimed at loading text data but the same logic can be applied for numeric data, the only difference would be the file type selection in the import format.

If you look at the import format in the 16.07 release, you will see that “All Data Type” file types were introduced for fixed, delimited and multi-column source files.


At that point there was “Multi Period” available for numeric data, where you could only specify period columns in the import format mapping. Fast forward to the 16.10 release and you will see that it was replaced with “Multi Column – Numeric Data”.


Currently in the on-premise 11.1.2.4.200 FDMEE version you only have the following file type options:


I am sure this will change in the future, maybe when the .210 patch finally lands.

Update 23rd December 2016: Functionality now available on-premise with FDMEE 11.1.2.4.210

Import format file type options from FDMEE 11.1.2.4.210:



Moving on, there is now another new way of loading data which arrived in the 16.12 cloud release and FDMEE 11.1.2.4.210:

EPM Cloud update document:
“Data Management now supports Oracle Financial Consolidation and Close Cloud “Period” dimensions as columns in a data file. If you have data for multiple periods in a single file, then you can include the year and period on each row of the data file that gets loaded to the target application.”

FDMEE 11.1.2.4.210 readme:
"Data files can include data for multiple periods. To support this, columns for period, year and period number are available to add in the import format."

Notice that the functionality in the EPM Cloud update document is only specified for FCCS. What about PBCS? Well, I will get on to that later.

So now it should be possible to specify the period and year in the rows of a source data file. To test out this new piece of functionality, I will go through a simple data loading example.

I am going to start out with the following source file which has a column for period/year and multiple periods in the rows.
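
To give an idea of the layout, the file looks something along these lines; the columns, members and values shown here are purely illustrative and not the actual file.

Entity,Account,Period,Year,Amount
110,Sales,Jan,2017,1000
110,Sales,Feb,2017,2000
110,Sales,Mar,2017,1500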


The objective is to load this file to Data Management, map the data to valid members and then load the data to a target FCCS application.

I have created a form to verify whether the data has been correctly loaded.


In order to map the period/year in the source file to the target application I have created source period mappings and assigned them to a calendar which will be selected in the data load rule.


On to the import format: before the 16.12 release, these were the options available when selecting the Add button.


Now there is a new drop-down option of “Source Period Row” where Year, Period or Period Number can be added.


These will then be added to the import format grid and the field number can be mapped to the source file.


In the data load rule the import format was selected, the source file was uploaded and selected, the period mapping type was set to explicit, and the calendar that was created earlier was selected.


I am not going into the data load mappings, as they were straightforward explicit mappings for account and entity members contained in the source file.

To load the data, you execute the rule from the data load rule screen as the workbench is currently fixed by the period/year that is set in the POV.

Even though my source file has data from January to December 2017 I am only going to select to load from January to June 2017.


Once the rule has been executed, process details confirm that the load from source to target was successful.


In the process steps I have no idea why the months are all out of order because if you take a look in the process log you will see they are processed in the correct order.


In the log you can also see that the rows which are not between the start and end period are rejected in the data load.


In the target options of the data load rule I had selected not to purge the data file that is created by Data Management and then loaded into the target FCCS application.

It is possible to download the file from the outbox to view the format of the file that is loaded to the target application.

If you have ever been involved with on-premise Planning, then you will recognise that the file is in the Outline Load Utility format, and viewing the process log shows that the utility is used to load the data.


The form I created earlier is now correctly displaying the data that was loaded.


Using this new method certainly makes life much easier to load data across multiple periods and years.

In the 16.12 cloud update document, it was stated that this new functionality was only available for FCCS but I was interested to know if it is available in PBCS.

I created an import format against a PBCS application and the “Source Period Row” option was available.


As the option is available, this would suggest it is possible to load from a source file containing periods and years in the rows.

I had to test out whether the process would be successful against a PBCS application.

Once again I started out with a simple source file spanning multiple periods and years.


The period/year were added and mapped in the import format.


I am not going to show them but I created source period mappings, a location and data load mappings.

The data load rule was set up in the same way as in the previous example.


The rule was executed with a start and end period to match the source file contents.


The full process of importing, mapping and exporting to the target was successful.



For demonstration purposes I created a form to show that the data was correctly loaded to the target PBCS application.


So even though PBCS was not mentioned in the update documentation the functionality is there and it is possible to load from a source file where the periods and years are defined in the rows.

Update: This functionality can be used against EPM cloud and on-premise.