Essbase Analytic Link (EAL) for HFM? Nope.

•March 31, 2015 • Leave a Comment

Essbase Analytic Link for HFM is a pretty cool tool that allows customers to spin off Essbase cubes based on HFM applications.  It's a great way to get near-real-time Essbase trickle cubes continuously built from HFM data.  You can also merge HFM data with other systems to create advanced analytic KPIs with greater detail.  Changes in HFM are automatically pushed to Essbase in seconds without putting too much burden on HFM itself.


Now – we are very used to the EAL developers being far behind the EPM developers.  As a matter of fact, the most current EAL version is the .400 patch-set, which finally got EAL supported, and it was just released 4 months ago!

That’s a problem.  But here is my biggest concern:

If you remember from my previous blog post on the extreme HFM platform changes, you will know the entire API and structure of HFM has been ripped out and replaced.  So it's obvious that many, if not all, of the calls that EAL uses with HFM are no longer there.  Not only is EAL unsupported with the new release, it will certainly not work at all.

So here's the question:  If it takes the EAL developers a year to reach compatibility with modern EPM versions when simply going from one release to the next… how long do you think it will take Oracle to produce a new version of EAL that will support an entirely new HFM architecture?

Answer from Oracle:  “We do not have a release target date”

The IT side take:   Don't hold your breath. Historically it takes a long time for EAL to get up to speed.  However, Oracle does see this as a pressing issue for adoption, so perhaps we will see a quicker response.

Regardless, for the time being, this is forcing some customers to hold off on upgrading and make some very hard decisions.

For example, one customer needs to use EAL to spin off Essbase cubes from HFM.  However, they would also like to purchase the new Tax Provisioning module.  It seems they cannot have both… they will need to either implement EAL and not purchase the Tax Module, or implement the Tax Module and not use EAL.

I will update when I get more information….

2015 Buzzwords and Trends in EPM

•March 23, 2015 • 1 Comment


Want a reality check?   The Hyperion brand is 30+ years old. What a ride it has been.

If you have been watching lately you will know that Oracle has been hosting many EPM Days all across the nation. These are fantastic regional events that bring local people together to network and gain insight through cutting-edge workshops and EPM roadmaps.

During this EPM Days blitz, Oracle released Oracle's Enterprise Performance Management Top Trends, which outlined their take on the latest EPM trends… and which is really more of a multi-page infographic.

Between these events and recent publications, I have seen quite a recurring theme in Oracle's EPM market strategy and roadmap, along with the obligatory bombardment of the latest buzzwords of the day. A few stick out to me…

1.) “Big Data”

The IT Side Take:

  • "90% of today's data has been generated in the last 24 months"
  • "Exponential growth of data being collected"
  • "We now have more data than we ever had before"
  • "We have more data than can realistically be processed with normal commodity hardware"

Ok, we get it… but is there any point in history where we would not be able to say the same thing? Do we really think this is a new problem? It's not… "Big Data" is just the new buzzword… actually not so new… it was coined in 1999.

We have always known about increasing data volumes… it was just called different things. In the '80s they used terms like "Data Explosion," and in the 2000s, words like "cyberdata" or "cyberinfrastructure." A lot of it came from the expectation of sensor data and RFID. The term "big data" means the same thing as the rest of them, but aligns more with other common phrases such as big oil and big tobacco.

And to be honest, the solution to this data volume problem has always been relatively the same as well. To solve problems with large amounts of data, the approach was to break the data up into chunks, process the chunks separately and in parallel, then put the results back together in a reportable format. Terms such as grid computing and distributed computing have been around forever and were invented for exactly this issue.
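That chunk-and-merge pattern is easy to sketch. Here is a minimal Python illustration; the worker function is just a stand-in for whatever real processing (aggregation, filtering, scoring) you would actually do:

```python
# Divide-and-conquer over a large dataset: split into chunks, process
# the chunks in parallel worker processes, then merge partial results.
from multiprocessing import Pool

def process_chunk(chunk):
    # Stand-in for real work (aggregation, filtering, scoring, ...)
    return sum(x * x for x in chunk)

def chunked(data, size):
    # Break the data into fixed-size chunks.
    return [data[i:i + size] for i in range(0, len(data), size)]

if __name__ == "__main__":
    data = list(range(100_000))
    with Pool(processes=4) as pool:
        partials = pool.map(process_chunk, chunked(data, 10_000))
    total = sum(partials)  # put it back together in a reportable format
    print(total)
```

Grid and distributed computing generalize the same idea across machines rather than processes on one box.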

Terms such as neural networks and machine learning have simply been replaced with "big data discovery" and "predictive analytics"… all meaning about the same thing.

However, it's the marketing behind it that makes it new and shiny. Somehow the high-tech marketing industry has convinced people that the large data volume issue was discovered and solved with the advent of Hadoop.

2.) “Internet of Things”

The IT Side take:

There will obviously be more Internet-connected devices than the human population in very short order. Mobile devices are expected to grow to 12.5b by 2020. Many people are carrying around multiple mobile devices. Houses, appliances, TVs, cars, watches, etc. are becoming connected with their own IP addresses. But is this new? Didn't we know this was coming in 1998 when we formalized IPv6 to allow for 2^128 IP addresses?
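For a sense of scale, the 128-bit address space works out like this:

```python
# IPv6 uses 128-bit addresses versus IPv4's 32-bit addresses, so the
# space grows from ~4.3 billion to ~3.4e38 possible addresses.
ipv4_space = 2 ** 32
ipv6_space = 2 ** 128

print(f"IPv4 addresses: {ipv4_space:,}")
print(f"IPv6 addresses: {ipv6_space:.3e}")
# Every single IPv4 address could be replaced by 2**96 IPv6 addresses.
print(f"IPv6-per-IPv4 ratio: {ipv6_space // ipv4_space:.3e}")
```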

Will these devices make our lives more convenient? Of course.   But, more importantly, this gives companies (and governments?) many more ways to track your behavior… See "1. Big Data."

3.) “Digital Disruption”

The IT Side Take:

A simple term implying technological innovation can come quickly and disrupt markets, the same way it always has and always will. The term implies you should reinvent your business to embrace or even lead rapid market change, referencing stories about ATMs, smart devices, and Uber. But has there ever been a time in history where that would not be sound advice? It's a bit overused and has lost its meaning.

I don't mean to get caught up on the buzzwords, but it's not just me. Check out Gartner's Hype Cycle for emerging technologies. There is even a graphical representation of the buzzworthiness of modern fads. Priceless…

Buzzwords aside, I think we can glean some sense of reality and direction, even from Oracle buzzwords and other industry marketing. The main themes in EPM that are repeated at these conferences and meetings: Big Data, Cloud, Mobile, Social collaboration, etc.

Personally, what I try to do is read between the lines and understand how that relates to our lives as EPM/Hyperion users, administrators, and analysts.

What I have learned is that undoubtedly the financial analysts of the near future will be "DFB," or Digital From Birth. To them, the idea of not having continuous Internet-based access to information, portable data, and instant collaboration is unfathomable. And those that are going to succeed will deliver those capabilities to these financial analysts and will gain competitive advantage by doing so.

The world is evolving… decisions cannot be made by gut or instinct… they must be made with data. The key is to drive data into daily decision making. The talent of tomorrow will have vastly different expectations of analytics and how to interact with it.

Big data for finance

  • Embrace predictive analytics to improve capital allocation and operations
  • Collaborate with operations to interpret new data sources
  • Increase frequency and broaden scope of management reporting.

Let's face it… financial data in and of itself is not necessarily "big data"; however, mixing operational and management data and analyzing the financial effects could very well be.

EPM in Finance

  • Develop an agile planning process to respond to a changing environment.
  • Connect operational planning with financial assumptions.
  • Push EPM lower to widen influence with lines of business.

EPM on the go

  • Mobile devices to increase to 12.5b by 2020
  • Mobile doubles adoption rates for business analysis.
  • Mobile adoption to grow 10x in 2014.
  • Make processes portable.

Collaboration and Social

  • Pursue the wisdom of the crowd
  • Expand qualitative commentary in management reporting process
  • Partner with CMO to monitor reputation risk and impact on operating assumptions

The average worker collaborates with 10 or more people to accomplish daily tasks.

So perhaps it is time to look at Oracle EPM slightly differently. Generally I see people using the Oracle EPM suite for historic reporting, annual planning, controls and compliance, transaction processing, etc.

Perhaps the new way to look at EPM is as a business navigation system: being able to adapt to market events such as new regulations, disruptive competitors, or shifts in customer culture. Maybe we can think of EPM as a tool for real-time decision making, predicting the future, and continuous improvement.

Sounds good and all… but there will certainly be cultural changes. Shift from using the tool as a report and statement generator to more of a predictive analytic tool. Shift from chasing the seemingly rising number of restatements to an anytime virtual close. (Oracle claims they have customers that close 25 times a day.)

What does it take? Willingness to change at all levels, steering committees and councils, world-class data governance, senior executive sponsorship, and investment. Hard? Maybe. Mandatory? You bet. Millennial analysts are coming. Let's get started…

HFM under the covers – the new architecture

•March 5, 2015 • Leave a Comment

Probably the biggest change in this release is to Financial Management. The entire architecture has been redone and recoded. The biggest driver for this re-architecture was to get HFM onto Linux. Why? You guessed it – so it can be put on Exalytics and eventually into some sort of consolidation cloud offering. I'll go deeper into the changes, but first… the shockers:

Shocker 1:  “Platform Independent”

For months, Oracle had been touting that HFM would be released "platform independent." However, when it was released, we read the shocker: HFM on Linux is only supported on Exalytics! I think I can join the rest of my brethren in the industry and say this is just downright infuriating. Oracle states that the reason for the Exalytics mandate was that they wanted a known platform to roll out the initial Linux code set. However, Oracle has historically only certified their products to an operating system, and technically there is no reason why you could not install HFM on a commodity server running the same Linux operating system. (As a matter of fact, people have.) Regardless, doing so would violate the license agreement and not be supported. It is not clear whether Oracle plans to release HFM for commodity Linux in the future; my guess is that they will. One thing is clear, however – this release is certainly not "platform independent."


Shocker 2:  No more copyapp

This release does not contain a copyapp utility to migrate an application between environments. Instead, they have included a new artifact in LCM called "HFM application snapshot" that will do this for you. This has been met with a lot of complaints in the community. The traditional copyapp was a simple utility that directly connected a source and target database. By forcing customers to use LCM, we now have to ensure we have plenty of disk space to export the entire HFM application to disk, then copy the export to the target system, then import. You can include data or not. Regardless, this will easily double the amount of time it takes to do HFM migrations. Also, you will need to ensure nobody is using the system during the migration.

Shocker 3: SSL incomplete

Currently, SSL is only supported in the web layer; it has not been implemented in the application layer yet, but that should be coming soon.

First Class Citizens

One of the biggest changes with the close suite is the promotion of the modules as "first class citizens" in the workspace. We see the new tax modules and Supplemental Data Manager in there.


Along with this, we see a general move from clients to the web, additional utilities, embedding OFMA in HFM, better Calc Manager parity with VBScript, and better integration with FDM.

New Web Profile Editor


Selection of multiple files for load actions (Firefox only)


Reordering of tabs by users as a user preference.


The New Architecture

In order to remove the Windows-dependent components, the basics had to change:

  • Remove DCOM as the internal communication method
  • Remove all IIS components, including ASP and .NET web services
  • Embrace Java as the backbone of the API.
  • Eliminate the configuration settings in the Windows System Registry

So that’s what they did.

  • DCOM communications were replaced with TCP/IP for client/server communication.
  • ADO was replaced by ODBC for database communication
  • Removed IIS components and integrated into Weblogic
  • Combined the Smartview provider with the HFM web application in WLS.
  • Combined the LCM provider with the new JHsxserver
  • Replaced the web services plugin with the Java API plugin for LCM.
  • Moved the configuration settings to the database instead of the Windows system registry, and moved all configuration to the web rather than the fat configuration clients.

As you can see, this architecture has been significantly simplified. All the web applications are in one WebLogic managed server (default port 9091), and all data retrievals go through one engine and one API. This also significantly reduced the code set footprint, eliminating 45% of the software files and 88% of the libraries.


The simplification also gave way to significant performance gains. The following outlines the biggest architectural differences that facilitated that.

  • Parallel threads. Historically, HFM was maxed at 8 threads for consolidations. And, to be honest, they did not scale all that well.   If you had an 8-CPU server that was using all 8 threads, you would see the CPU utilization dwindle as they scaled out. During large consolidations, it was common to see the first CPU at 100% and the last CPU only being used 20%. In the new release, the 8-thread limitation has been removed and the calculation engine can now scale across all available CPUs. Oracle has tested this on servers with over 60 cores and has seen uniform distribution. Of course, remember that this is dependent on the design of the application as well. You can only parallelize if the application is written in a way that makes it possible. Regardless, these new advantages will significantly change the way consulting organizations architect HFM applications and the HFM servers.
  • Data retrieval: all data retrieval has been centralized into a single query engine. So things like FR, Smartview, and Web all use the same method for getting to HFM data.
  • Store only used currencies. In earlier versions, all currencies and associated records were processed when impacting, regardless of whether your application used them. This was unnecessary work. Now, only currencies used in the application are updated. If the parent has a different currency, it will pick that up at a later time when impact hits it.
  • User interface interactions and responsiveness. They replaced the very chatty web services with a thrifty, optimized engine that transfers objects between the C++ server and the Java-based web tier.
  • Use SmartHeap. Oracle was able to improve memory allocation and reduce thrashing of the heap. Instead of using malloc(), it now uses the SmartHeap library's optimized heap management.
  • Replaced ADO. The new ODBC method provides more efficient database communication.

Oracle's initial benchmarking of HFM in the new version is impressive. They have described case studies that claim instances of consolidation times shrinking from 2 hours to 6 minutes simply by upgrading. Surprisingly, there is not much difference in performance at this time between a similarly sized commodity server and Exalytics. However, that will change as Oracle is working on Exalytics-specific features in upcoming releases.

But what can YOU expect? Of course it depends on application design.  Here are some major factors to consider:


  • If your app has more entities, it is more likely to see significant benefits.
  • If the number of records in each entity across accounts and custom dimensions is relatively large, you are more likely to see tangible improvement.


  • If all base entities are similar in size (within ±10%), you will likely utilize the true parallelism.
  • If all parent entities are similar in size, you will see further parallelism as you roll up.
  • Uniform entity structures tend to scale well with more processors.
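A toy scheduling model makes it clear why uniformity matters (this is my own illustration, not HFM's actual consolidation engine): a parallel phase only finishes when its longest-running entity does, so one oversized entity erases most of the gain.

```python
# Toy model: wall-clock time to process entities on N worker threads,
# where each entity's cost is proportional to its record count.
import heapq

def parallel_time(entity_costs, threads):
    # Greedy longest-processing-time scheduling: always hand the next
    # biggest entity to the least-loaded worker.
    workers = [0] * threads
    heapq.heapify(workers)
    for cost in sorted(entity_costs, reverse=True):
        least_loaded = heapq.heappop(workers)
        heapq.heappush(workers, least_loaded + cost)
    return max(workers)

uniform = [100] * 16            # 16 base entities, all the same size
skewed = [850] + [10] * 15      # one giant entity dominates the work

for name, costs in (("uniform", uniform), ("skewed", skewed)):
    serial, par = sum(costs), parallel_time(costs, threads=8)
    print(f"{name}: serial={serial} parallel={par} speedup={serial / par:.1f}x")
```

With the uniform mix, 8 threads stay busy the whole time; with the skewed mix, the wall-clock time collapses to the cost of the single giant entity no matter how many threads you have.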

Business Rules

  • Computationally intensive rules will show improvements.
  • Non-uniform rules across entities will not see as much benefit.

In general, the less uniform the structure is, the less performance gains you will see.


Tips for


Multiple parents

  • Spaghetti hierarchies will make performance unpredictable and hinder parallel processing, because overlapping hierarchies compete for the same locks and are queued.
  • Try to go with single-parent entities.

Usage and infrastructure

  • Overlapping concurrent consolidations hinder parallelism.
    • Two users consolidating overlapping hierarchies will compete for locks.
  • Always use dedicated consolidation servers.
  • It is better to have larger servers with more CPUs than multiple smaller application servers, to take advantage of parallelism.

Uniform structure

  • For the best results, try for an equal number of children in each parent.



Performance Settings

As part of the restructure, Oracle got rid of the client based configuration utilities and moved them into the web.

Under Consolidation Administration, you will see Messages.



Selecting Settings will give you the option to change performance settings. Settings can be changed for all applications, or specifically per application, so you can add more resources to one application than another…


They also included a nice feature to take a note documenting the reasoning for the change:


Most settings will require a restart of the HFM services to go into effect.

There are a total of 29 settings that can be used to tune or choose desired behavior. Here is a small list of some of the most important tuning parameters:

  • MaxNumDataRecordsInRAM.   The number of records held in the data cache.
  • MaxDataCacheSizeInMB.   The maximum memory allocated for the data cache.
  • MaxNumCubesInRAM.   The maximum number of cubes held in RAM.
  • NumConsolidationThreads.   The number of threads that can be used by a consolidation.
  • NumThreadsToUseWhenUpdatingCalcStatusSystemWasChanged.   The number of threads to use when updating calc status… should be a function of the number of CPUs on the system.
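Since several of these settings scale with the host's CPU count, you could imagine a small sizing helper like the following. To be clear, the function name, the defaults, and the "reserve a couple of cores" rule are all my own illustration, not Oracle guidance:

```python
# Hypothetical sizing helper: derive the CPU-bound thread settings from
# the machine's core count, reserving some cores for the OS / web tier.
# The reservation rule is an assumption for illustration only.
import os

def suggested_thread_settings(reserved_cores=2):
    cores = os.cpu_count() or 4          # fall back if the count is unknown
    usable = max(1, cores - reserved_cores)
    return {
        "NumConsolidationThreads": usable,
        "NumThreadsToUseWhenUpdatingCalcStatusSystemWasChanged": usable,
    }

print(suggested_thread_settings())
```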



Logging and services

As part of the changes, Oracle got HFM in line with the rest of the products from a logging perspective. There is no error log viewer anymore and logs are now sent to a typical text-based file that can be controlled with Oracle Diagnostic Logging (ODL). They have the same typical fields, including the very helpful ECID concept. Logs are separated by application as well. System wide messages continue to function in the messages log viewer in the web.


Processes and services

The following processes have been removed:

  • HsvDatasource.exe
  • HsxServer.exe
  • HFMService.exe
  • DMEListener.exe

Instead, a new process called XfmDatasource.exe (yes, .exe even on Linux) has been created to manage the datasource process.



What’s next:

Rumors for the .100 PSU

  • Exalytics-only features (not just performance) – make HFM more aware and self-tuning.
    • Insights – who did what: what people are doing, top users of the system, graphical analysis.
    • Rule Profiling
  • Additional HFM utilities
  • Excel Based Journals Workbench
  • UI Enhancements
  • Application creation Wizard
  • More web based utilities
  • Ability to add years

Roadmap and Beyond

  • Web-based member list editor
  • Auto Archive of data audit
  • Mobile interface
  • Multiple databases
  • Audit Extract
  • HFM EA Copy Template
  • HFM Index Update Utility

New Web Based Cumulative Feature Overview

•March 4, 2015 • Leave a Comment

Want to know the new features from the version you are on to the latest and greatest?  We have had a spreadsheet version of this before, but check out the new web-based Cumulative Feature tool: a quick and easy way to check new features, by version, by product.

The tool is available at:


In this example, I selected Data Relationship Manager differences from one version to another:


The result was a 65-line list of new features, broken out by version:


You can even export the results to Excel:


Supported upgrade paths to

•March 4, 2015 • Leave a Comment

From 11.1.2.x:

In general, you can apply the maintenance release to upgrade.  However, if you are on HFM, you must upgrade to at least .2 or .3 first.  If you are using FCM, the maintenance release is only supported from certain versions.


Upgrade to first, then upgrade to

From to

Apply maintenance release to, then upgrade to, then upgrade to

EPM Conference alert: Klondike 2015, Ohio Feb 25th-27th

•February 9, 2015 • Leave a Comment


Check out the Klondike conference… one of the only conferences dedicated to Hyperion. This is a multi-day, information-packed event for a fairly cheap price ($220).

The conference will start with a Wednesday night customer showcase event (free to attend).  Thursday will be dedicated to Hyperion-only based session breakouts conducted by industry leaders.

Friday will be a morning panel session dedicated to upgrades.

I'll be discussing what's new and showcasing examples of modern IT approaches to meet today's performance and availability requirements.

Date:  Feb 25th – 27th

Location: Sawmill Creek Resort, Huron/Sandusky, OH

For more information go to the Klondike 2015 Official Web Site:

New Podcast: #oracleEPM Upgrades.

•February 6, 2015 • Leave a Comment


Check out the latest ArchBeat EPM Podcast, all about Oracle EPM upgrades, with my colleagues John Booth and Rob Donahue.


