Implementing HR Analytics using Universal Adaptors
Oracle Corporation

A technical document covering various aspects of the product as it applies to Oracle Business Intelligence Applications HR Universal Adaptors.

Contents

1. Implementing HR Analytics Universal Adaptors
2. General Background of Universal Adaptors
3. General Implementation Considerations
4. Know Your Steps Towards a Successful HR Analytics Implementation for Universal Adaptor
5. Impact of Incorrect Configurations of Domain Values
6. Impact of Incorrect Population of Code-Name Columns
7. Best Practices for Extracting Incremental Changes
8. Detailed Understanding of the Key HR ETL Processes
   8.1. Core Workforce Fact Process
      8.1.1. ETL Flow
      8.1.2. Workforce Fact Staging (W_WRKFC_EVT_FS)
      8.1.3. Workforce Base Fact (W_WRKFC_EVT_F)
      8.1.4. Workforce Age Fact (W_WRKFC_EVT_AGE_F)
      8.1.5. Workforce Period of Work Fact (W_WRKFC_EVT_POW_F)
      8.1.6. Workforce Merge Fact (W_WRKFC_EVT_MERGE_F)
      8.1.7. Workforce Month Snapshot Fact (W_WRKFC_EVT_MONTH_F)
      8.1.8. Workforce Aggregate Fact (W_WRKFC_BAL_A)
      8.1.9. Workforce Aggregate Event Fact (W_WRKFC_EVT_A)
      8.1.10. Handling Deletes
      8.1.11. Propagating to Derived Facts
      8.1.12. Date-Tracked Deletes
      8.1.13. Purges
      8.1.14. Primary Extract
      8.1.15. Identify Delete
      8.1.16. Soft Delete
      8.1.17. Date-Tracked Deletes - Worked Example
   8.2. Recruitment Fact Process
      8.2.1. ETL Flow
      8.2.2. Job Requisition Event & Application Event Facts (W_JOB_RQSTN_EVENT_F & W_APPL_EVENT_F)
      8.2.3. Job Requisition Accumulated Snapshot Fact (W_JOB_RQSTN_ACC_SNP_F)
      8.2.4. Applicant Accumulated Snapshot Fact (W_APPL_ACC_SNP_F)
      8.2.5. Recruitment Pipeline Event Fact (W_RCRTMNT_EVENT_F)
      8.2.6. Recruitment Job Requisition Aggregate Fact (W_RCRTMNT_RQSTN_A)
      8.2.7. Recruitment Applicant Aggregate Fact (W_RCRTMNT_APPL_A)
      8.2.8. Recruitment Hire Aggregate Fact (W_RCRTMNT_HIRE_A)
   8.3. Absence Fact Process
      8.3.1. ETL Flow
      8.3.2. Absence Event Fact (W_ABSENCE_EVENT_F)
   8.4. Learning Fact Process
      8.4.1. ETL Flow
      8.4.2. Learning Enrollment Accumulated Snapshot Fact (W_LM_ENROLLMENT_ACC_SNP_F)
      8.4.3. Learning Enrollment Event Fact (W_LM_ENROLLMENT_EVENT_F)
   8.5. Payroll Fact Process
      8.5.1. ETL Flow
      8.5.2. Payroll Fact (W_PAYROLL_F)
      8.5.3. Payroll Aggregate Fact (W_PAYROLL_A)
9. Known Issues and Patches
1. Implementing HR Analytics Universal Adaptors

The purpose of this document is to provide the information one might need while attempting an implementation of HR Analytics using the Oracle BI Applications Universal Adaptors. There are several myths around what needs to be done when implementing Universal Adaptors: where things can go wrong if not configured correctly, which columns must be populated, how to provide a delta data set for incremental ETL runs, and so on. All of these topics are discussed in this document.

Apart from understanding the entry points required to implement HR Analytics, it also helps to know the process details of some key components of HR Analytics. A few of these key facts and dimensions are discussed, and an overview of their processes and usages is provided towards the end. Knowing the list of files/tables to be populated (the entry points), the grain of such tables, and the kind of data they expect has always been a key pain point for implementers. This is also discussed in detail, and an Excel spreadsheet (HR_Analytics_UA_Lineage.xlsx) provided along with this document addresses these needs.

This document is intended for Oracle BI Applications Releases 7.9.6, 7.9.6.1, 7.9.6.2 and 7.9.6.3. For upcoming releases, this document will be updated in due course.

2. General Background of Universal Adaptors

Oracle BI Applications
provide packaged ETL mappings against source OLTP systems such as Oracle E-Business Suite, PeopleSoft, JD Edwards and Siebel, across various business areas such as Human Resources, Supply Chain & Procurement, Order Management, Financials, Service and so on. However, Oracle BI Applications does acknowledge that there can be quite a few other source systems, including home-grown ones, typically used by SMB customers. To that extent, some enterprise customers may also be using SAP as their source. Until it gets to a point where Oracle BI Applications can deliver pre-built ETL adaptors against each of these source systems, the Universal Adaptor remains a viable choice. A mixed OLTP landscape, where one system has a pre-built adaptor and another does not, is also a scenario that calls for the use of Universal Adaptors. For instance, the core portion of Human Resources may be in a PeopleSoft system, but the Talent Management portion may be maintained in a third-party system like Taleo.

To enable customers to pull data from non-supported source systems into the Data Warehouse, Oracle BI Applications created the so-called Universal Adaptor. This was doable in the first place because the ETL architecture of Oracle BI Applications had evident support for it. The Oracle BI Applications Data Warehouse consists of a large set of fact, dimension and aggregate tables. The portion of the ETL that loads these end tables is typically source independent (loaded using the Informatica folder SILOS). These ETL maps start from a staging table and load data incrementally into the corresponding end table. Aggregates are created upstream and have no relation to which source system the data came from. The ETL portion that extracts into these staging tables (also called Universal Stage Tables), the Source Dependent Extract, is the one that goes against a given source system, like EBS or PSFT and so on. For Universal, it goes against a similarly structured CSV file. Take any adaptor: the universal stage tables are structurally exactly the same. The grain expectation is also exactly the same for all adaptors. As long as these conditions are met, the SILOS part will load the data (extracted from Universal) from the universal stage tables seamlessly.

Why did Oracle BI Applications decide to source from CSV files? In short, the answer is to complete the end-to-end extract-transform-load story. We will cover this in a bit more detail, along with what the options are, in the next section.

3. General Implementation Considerations

One myth that implementers
have while implementing Universal Adaptors is that data for the universal staging tables must always be presented to Oracle BI Applications in the required CSV file format. If your source data is already present in a relational database, why dump it to CSV files and hand it to Oracle BI Applications? You will anyway have to write brand new ETL mappings that read from those relational tables to get to the right grain and right columns. Why target those to CSV files and then use the Oracle BI Applications Universal Adaptor to read from them and write to the universal staging tables? Why not directly target those custom ETL maps at the universal staging tables? In fact, when your source data is in relational tables, this is the preferred approach.

However, if your source data comes from third-party sources which you have outsourced, where you probably have agreements to receive data files/reports once in a while, and that third party does not allow you to access their relational schema, then CSV files are probably the only alternative. A typical example would be Payroll data. A lot of organizations outsource their Payroll to third-party companies like ADP. In those cases, ask for the data in the same manner that you expect it in the Oracle BI Applications CSV files. Also, if your source data lies in IBM mainframe systems, where it is typically easier to write COBOL programs (or similar) to extract the data into flat files, presenting CSV files to the Oracle BI Applications Universal Adaptor is probably easier.

[Diagram: two paths from the OLTP (relational DB) to the Data Warehouse - a custom extract to CSV files followed by the Universal Adaptor to the staging tables, or a custom extract directly to the staging tables.]

Irrespective of how you populate the universal staging tables (relational sources or CSV sources), five very important points should always be kept in mind:

- The grain of the universal staging tables is met properly.
- The uniqueness of records exists in the (typically) INTEGRATION_ID columns.
- The mandatory columns are populated the way they should be.
- The relational constraints are met while populating data for facts. In other words, the natural keys that you provide in the fact universal staging table must exist in the dimensions. This relates to FK resolution (resolving dimension keys into the end fact table).
- The incremental extraction policy is well set up. Some overlap of data is OK, but populating the entire dataset into the universal staging tables will prove to be non-performing.

Note: For the rest of the document, we will assume that you are going the CSV file approach, although, re-iterating, if your source data is stored in a relational database it is recommended that you write your own extract mappings.
4. Know your steps towards a successful HR Analytics implementation for Universal Adaptor

There are several entry points while implementing HR Analytics using Universal Adaptors. The base dimension tables and base fact tables have corresponding CSV files in which you should configure the data at the right grain and to the right expectations. Other kinds of tables include Exchange Rate and Codes. Exchange Rate (W_EXCH_RATE_G) has its own single CSV file, whereas the Codes table (W_CODE_D) has one CSV file per code category. To see all code-names properly in the dashboards/reports, you should configure all the required code CSV files [the list of such files is provided in the associated spreadsheet (HR_Analytics_UA_Lineage.xlsx)].

[Diagram: configuration order - Common HR Dimensions, Fact Specific HR Dimensions, HR Event Dimensions, Common Class Dimensions, Other Common Dimensions, Code Dimensions, Workforce Fact, Other Facts, Domain CSV Files.]

Key points:

- Start by populating the core HR dimension CSV files, like Employee, Job, Pay Grade, HR Position, Employment etc.
- Then configure the subject area specific HR dimensions, like Job Requisitions, Recruitment Source etc. (when implementing Recruitment), Learning Grade, Course (when implementing Learning), Pay Type etc. (when implementing Payroll), or Absence Event, Absence Type Reason (when implementing Absence), and so on.
- Two important HR event dimensions should go next. These are Workforce Event Type (mandatory for all implementations) and Recruitment Event Type (if implementing Recruitment). These tables can decide the overall fate of the implementation. Identifying events from your source system and stamping them with domain values known to Oracle Business Intelligence are the key aspects here.
- Then configure the related COMMON class dimensions applicable to all, like Internal Organizations (logical/applicable partitions being Department, Company Organization, Job Requisition Organization, Business Unit etc.) or Business Locations (logical/applicable partitions being Employee Locations) etc.
- Consider other shared dimensions and helper dimensions like Status (logical/applicable partition being Recruitment Status), Exchange Rate etc.
- Then consider the code dimensions. By this time you already know which dimensions you are going to implement, and hence can save time by configuring the CSVs for only the corresponding code categories.
- For facts, start with configuring the Workforce Event fact. Since the dimensions are already configured, the natural keys of the dimensions are already known to you, and it should be easy to configure them in the fact.
- The Workforce fact should be followed by the subject area specific HR facts, like Payroll, Job Requisition Event, Applicant Event, Learning Enrollment, and Learning Enrollment Accumulated Snapshot. Note that the Absence fact does not need a CSV file to be populated; it is populated off of the Absence Event dimension, the Workforce Event fact and the time dimension as a Post Load process.

Once all the CSV files for facts, dimensions, and helper tables are populated, you should move your focus to the domain value side of things. For the E-Business Suite and PeopleSoft Adaptors, we do mid-stream lookups against preconfigured lookup CSV files; the map between source values/codes and their corresponding domain values/codes comes pre-configured in these lookup files. For the Universal Adaptor, however, no such lookup files exist, because we expect the accurate domain values/codes to be configured along with the base dimension tables where they apply. Since everything is from a CSV file, there is no need for the overhead of an additional lookup file acting in the middle. Domain value columns begin with W_ [excluding the system columns like W_INSERT_DT and W_UPDATE_DT]; normally they are mandatory, cannot be null, and the value set cannot be changed or extended. We do relax the extension part on a case-by-case basis, but in no way can the values be changed. The recommendation at this stage is to go to the DMR guide (Data Model Reference Guide), get the list of table-wise domain values, understand the relationships clearly where hierarchical or orthogonal relations exist, identify the tables where they apply and their corresponding CSV files, look at the source data, and configure the domain values in those same CSV files. Note that if your source data is in a relational database and you have chosen the alternate route of creating all the extract mappings yourself, the recommendation is to follow what we have done for the E-Business Suite and PeopleSoft Adaptors: create separate domain value lookup CSV files and do a mid-stream lookup.

Last, but not least, configure the parameters in DAC (Data Warehouse Administration Console). Read the documentation for these parameters, understand their expectations, study your own business requirements, and then set the values accordingly.
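To make the "dimensions first, facts second" ordering concrete, here is a purely illustrative Python sketch that writes a core dimension CSV and then a fact CSV whose natural keys reuse the dimension's INTEGRATION_ID values. The file names and column subsets are invented for illustration; the real file layouts are the ones listed in HR_Analytics_UA_Lineage.xlsx:

```python
import csv

def write_csv(path, header, rows):
    """Tiny helper to emit a configuration CSV."""
    with open(path, "w", newline="") as f:
        w = csv.writer(f)
        w.writerow(header)
        w.writerows(rows)

# 1. Core dimension first: the natural key lives in INTEGRATION_ID.
write_csv("file_employee.csv",
          ["INTEGRATION_ID", "FULL_NAME", "W_SEX_MF_CODE"],
          [["EMP100", "Pat Lee", "F"]])

# 2. Fact afterwards: its dimension references reuse the natural keys
#    configured above, so FK resolution succeeds during the load.
write_csv("file_wrkfc_evt.csv",
          ["INTEGRATION_ID", "EMPLOYEE_ID", "EVENT_TYPE_ID", "EVENT_DT"],
          [["EMP100~HIRE~20100104", "EMP100", "HIRE", "20100104"]])
```

Because the fact's EMPLOYEE_ID matches the dimension's INTEGRATION_ID exactly, the load phase can resolve the dimension key; any mismatch would surface as an unresolved foreign key in the end fact table.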
5. Impact of incorrect configurations of domain values

Domain values constitute a very important foundation for Oracle Business Intelligence Applications. We use this concept heavily across the board to equalize similar aspects from a variety of source systems. Oracle Business Intelligence Applications provide packaged data warehouse solutions for various source systems such as E-Business Suite, PeopleSoft, Siebel, JD Edwards and so on. We provide a source dependent extract type of mapping that leads to a source independent load type of mapping, followed by a post load (also source independent) type of mapping. With data possibly coming in from a variety of source systems, this equalization is necessary. Moreover, the reporting metadata (OBIEE RPD) is also source independent, and the metric calculations are obviously source independent as well.

The following diagram shows how a worker status code/value is mapped onto a warehouse domain to conform to a single target set of values. The domain is then re-used by any measures that are based on worker status.

[Diagram: worker status codes from two OLTP systems (A/Active, I/Inactive, 2/Suspended, 3/Terminated, 1/Active) are each mapped to the conformed warehouse domain values ACTIVE and INACTIVE, which the Active measures then reference.]

Domain values help us to equalize similar aspects or attributes as they come from different source systems. We use these values in our ETL logic, sometimes even as hard-coded filters. We use these values in defining our reporting layer metrics. Hence, not configuring these domain value columns, configuring them incorrectly, or changing the values from what we expect will lead to unpredictable results. You may have a single source system to implement, but you still have to go through all the steps and configure the domain values based on your source data. This is the small price you pay for going the buy approach vs. the traditional build approach for your data warehouse.

One of the most frequently asked questions is: what is the difference between domain value code/name pairs and the regular code/name pairs that are stored in W_CODE_D? If you look at the structure of the W_CODE_D table, it also appears capable of standardizing code/name pairs to something common. This is correct. However, we wanted to give users extensive freedom to do that standardization (not necessarily equalization) of their codes/names, and possibly use it for cleansing as well. For example, if the source supplied code/name pairs are CA/CALIF or CA/California, you can choose the W_CODE_D approach (using the Master Code and Master Map tables; see the configuration guide for details) to standardize on CA/CALIFORNIA. To explain the difference between domain value code/name pairs and regular code/name pairs, it is enough to understand the significance of the domain value concept. To keep it simple: wherever we (Oracle Business Intelligence Applications) felt that we should equalize two similar topics that give us analytic value, metric calculation possibilities and so on, we promoted a regular code/name pair to a domain value code/name pair.

If we have a requirement to provide a metric called Male Headcount, we cannot do that accurately unless we know which of the headcount is Male and which is Female. The metric has easy calculation logic: sum of headcount where sex = Male. Since PeopleSoft can call it "M" and EBS can have "male", we made it a domain value code/name pair, W_SEX_MF_CODE (available in the employee dimension table). Needless to say, if you did not configure your domain value for this column accurately, you will not get this metric right.
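The equalization idea can be sketched in a few lines of Python. The source codes and map contents below are invented for illustration; in the product the mapping is configured directly in the CSV files:

```python
# Source-to-domain maps would normally live in the CSV configuration;
# the codes below are illustrative.
PSFT_SEX_DOMAIN = {"M": "M", "F": "F"}         # PeopleSoft-style codes
EBS_SEX_DOMAIN = {"male": "M", "female": "F"}  # illustrative EBS codes

def conform(rows, domain_map):
    """Stamp each extracted row with the warehouse domain value."""
    out = []
    for row in rows:
        conformed = dict(row)
        # An unmapped code conforms to "U" (unknown); such rows silently
        # drop out of sex-based metrics, which is the real-world cost of
        # an inaccurate domain value configuration.
        conformed["W_SEX_MF_CODE"] = domain_map.get(row["SEX_CODE"], "U")
        out.append(conformed)
    return out

def male_headcount(rows):
    """Metric logic is written purely against the domain column."""
    return sum(1 for r in rows if r["W_SEX_MF_CODE"] == "M")
```

The metric works across sources only because both extracts were conformed to the same domain before the counts were taken.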
6. Impact of incorrect population of CODE-NAME columns

The Oracle BI Applications mostly use Name and Description columns in the out-of-the-box dashboards and reports. We use Codes only during calculations, wherever required. Therefore, if the names and descriptions did not resolve against their codes during the ETL, you will see blank attribute values (or, in some cases, depending on the DAC parameter setting, you might see certain default strings instead).

Another point to keep in mind is that all codes should have distinct name values. If two or more codes have the same name value, you will see them merged at the report level. The metric values may sometimes appear on different lines of the report, because the OBIEE Server typically throws in a GROUP BY clause on the lowest attribute (the code).

Once implemented, you are free to promote the code columns from the logical layer to the presentation layer. You might do this when you know your business users are more acquainted with the code values than with the name values. But that is a separate business decision; it is not the general behavior.
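The merging behavior is easy to reproduce. The following Python sketch, with invented job codes, mimics a report grouped on a single displayed attribute:

```python
from collections import defaultdict

def report_by(rows, attr, measure="HEADCOUNT"):
    """Mimic a report grouped on one displayed attribute."""
    totals = defaultdict(int)
    for r in rows:
        totals[r[attr]] += r[measure]
    return dict(totals)

# Two distinct codes accidentally share the name "Analyst".
rows = [
    {"JOB_CODE": "AN1", "JOB_NAME": "Analyst", "HEADCOUNT": 10},
    {"JOB_CODE": "AN2", "JOB_NAME": "Analyst", "HEADCOUNT": 5},
    {"JOB_CODE": "MGR", "JOB_NAME": "Manager", "HEADCOUNT": 3},
]
```

Grouping on JOB_NAME collapses the two Analyst codes into one report line with a combined headcount of 15, whereas grouping on JOB_CODE keeps them distinct, which is why every code should carry a distinct name.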
7. Best practices for extracting incremental changes

Although you can choose to supply the entire dataset during incremental runs, for all practical reasons this is not recommended. First, the ETL then has to process all the records and determine what needs to be applied and what can be rejected. Second, the decision the ETL takes may not be accurate. ETL decisions are based only on the values of the system date columns CHANGED_ON_DT, AUX1_CHANGED_ON_DT, AUX2_CHANGED_ON_DT, AUX3_CHANGED_ON_DT and AUX4_CHANGED_ON_DT. We do not explicitly compare column-by-column to determine whether an update is required. We assume that if something has changed, one of these five date columns must have changed, and in that case we simply update. If all five date columns are the same, we tend to reject the record. The basis of this decision is the correctness of the date columns. If your source system does not track the last updated date on a record well enough, it becomes your responsibility to force an update, no matter what. An easy way to do this is to set SESSSTARTTIME in one of these columns during extract. This forces a change to be detected, and we end up updating. Needless to say, this is not the best idea. By all means, you should provide the true delta data set during every incremental run. A small amount of overlap is acceptable, especially when you deal with flat files.

Our generally accepted rules for facts or large dimensions are either:

- The customer does their own version of persisted staging, so they can determine changes at the earliest opportunity and only load changes into the universal staging tables; or
- If it is absolutely impossible to determine the delta or to go the persisted staging route, the customer only does full loads.

Otherwise, doing a full extract every time and processing it incrementally will take longer. Follow the principles below to decide on your incremental strategy:

(Applies to relational table sources) Does your source system accurately capture the last update date/time on the source records that change? If so, extracting based on this column is the best idea. Now, your extract mapping may use two or three different source tables. Decide which one is primary and which ones are secondary. The last update date on the primary table goes to the CHANGED_ON_DT column in the stage table. The same from the other tables goes to one of the auxiliary changed-on date columns in the stage table. If you design your extract mapping this way, you are almost done. Just make sure you add the filter criterion primary_table.last_update_date >= $$LAST_EXTRACT_DATE. The value of this parameter is maintained in DAC.

(Applies to CSV file sources) Assuming that there is a mechanism you can trust that gives you the delta file during each incremental load, does the delta file come with changed values of the system dates? If yes, you're OK. If not, you should add an extra piece of logic in the out-of-the-box SDE_Universal mappings that sets SESSSTARTTIME in one of the system date columns. This will force an update (when possible) no matter what.
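Both practices, filtering on the last update date and stamping the session start time when the source dates cannot be trusted, can be sketched as follows. This is an illustrative Python analogue, not the actual Informatica logic; the record layout is invented:

```python
from datetime import datetime

def incremental_extract(rows, last_extract_date, trust_source_dates=True):
    """Select the delta since the last ETL run and stage it.

    rows              -- source records carrying a LAST_UPDATE_DATE datetime
    last_extract_date -- analogue of the DAC $$LAST_EXTRACT_DATE value
    """
    sess_start_time = datetime.now()  # analogue of SESSSTARTTIME
    staged = []
    for r in rows:
        # Analogue of the extract filter:
        #   primary_table.last_update_date >= $$LAST_EXTRACT_DATE
        if trust_source_dates and r["LAST_UPDATE_DATE"] < last_extract_date:
            continue
        rec = {"INTEGRATION_ID": r["ID"],
               "CHANGED_ON_DT": r["LAST_UPDATE_DATE"]}
        if not trust_source_dates:
            # The source does not maintain last-update dates reliably:
            # stamp the session start time so the load always detects a
            # change and applies the update.
            rec["CHANGED_ON_DT"] = sess_start_time
        staged.append(rec)
    return staged
```

With trustworthy dates the filter keeps the extract small; without them, every supplied row is force-stamped so the load never silently rejects a real change.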
(Applies to CSV file sources) If there is no mechanism to easily give you a delta file during incremental loads, and it seems easier to get a complete dump every time, you actually have a couple of choices:

a. Pass the whole file in every load, but run true incremental loads. Note that this is not an option for large dimensions or facts.
b. Pass the whole file each time and always run a full load.
c. Do something at the back office to process the files and produce the delta file yourselves.

Choices (a) and (b) may sound like a bad idea, but we've seen them be a worthwhile solution compared to (c) if the source data volume is not very high. For an HR Analytics implementation, this could be OK as long as your employee strength is no more than 5000 and you have no more than 5 years of data. Choice (c) is more involved but produces the best results. The idea is simple. You store the last image of the full file that your source gave you (call it A). You get your new full file today (call it B). Compare A and B. There are quite a few data-diff tools available in the market, or better, you could write a Perl or Python script of your own. The result of this script should be a new delta file (call it C) that has the lines copied from B that have changed as compared to A. Use this new file C as your delta data for your incremental runs. Also, discard A and rename B as A, thereby getting ready for the next batch.

Having said that, it is worthwhile to re-iterate that Persisted Staging is a better approach, as it is simpler and uses Informatica to do the comparison. Oracle BI Applications have used this technique in the HR adaptors for E-Business Suite and PeopleSoft, in case you want to refer to them. If there are other options not considered here, by all means try them out. This list is not comprehensive; it is indicative.
8. Detailed understanding of the key HR ETL Processes

8.1. Core Workforce Fact Process

8.1.1. ETL Flow

[ETL flow diagram: the Workforce Event Fact CSV file feeds the Workforce Fact Staging table (W_WRKFC_EVT_FS), which loads the Workforce Fact (W_WRKFC_EVT_F). With the Age Band (W_AGE_BAND_D) and Period of Work Band (W_PRD_OF_WRK_BAND_D) dimensions, it derives the Workforce Age Fact (W_WRKFC_EVT_AGE_F) and Workforce Service Fact (W_WRKFC_EVT_POW_F). These merge into the Workforce Merge Fact (W_WRKFC_EVT_MERGE_F), which with the Month Dimension (W_MONTH_D) produces the Workforce Month Snapshot Fact (W_WRKFC_EVT_MONTH_F). The snapshot and merge facts, together with the aggregate dimensions W_EMPLOYMENT_STAT_CAT_D and W_WRKFC_EVENT_GROUP_D, feed the Workforce Balance Aggregate (W_WRKFC_BAL_A) and the Workforce Event Aggregate (W_WRKFC_EVT_A).]

Terminology

Assignment is used to refer to an
instance of a person in a job. It should not be an update-able key on the source transaction system.

Key Steps and Table Descriptions

Table: W_WRKFC_EVT_FS
Primary Sources: Flat file (source adaptors)
Grain: One row per assignment per workforce event
Description: Records workforce events for assignments from hire/start through to termination/end. Includes appraisals, salary reviews and general changes.

Table: W_WRKFC_EVT_F
Primary Sources: W_WRKFC_EVT_FS

Table: W_WRKFC_EVT_AGE_F
Primary Sources: W_AGE_BAND_D, W_WRKFC_EVT_F
Grain: One row per assignment per age band change
Description: Records age band change events for each assignment.

Table: W_WRKFC_EVT_POW_F
Primary Sources: W_PRD_OF_WRK_BAND_D, W_WRKFC_EVT_F
Grain: One row per assignment per service band change
Description: Records service band change events for each assignment.

Table: W_WRKFC_EVT_MERGE_F
Primary Sources: W_WRKFC_EVT_F, W_WRKFC_EVT_AGE_F, W_WRKFC_EVT_POW_F
Grain: One row per assignment per workforce or band change event
Description: Merges band change events with workforce events for assignments.

Table: W_WRKFC_EVT_MONTH_F
Primary Sources: W_WRKFC_EVT_MERGE_F, W_MONTH_D
Grain: One row per assignment per change event or snapshot month
Description: Adds in monthly snapshot records along with the workforce and band change events.

Table: W_WRKFC_EVT_EQ_TMP
Primary Sources: W_WRKFC_EVT_FS
Grain: One row per changed assignment
Description: Reference table for which assignments have changed and the earliest change dates.

Table: W_WRKFC_EVT_MONTH_EQ_TMP
Primary Sources: W_WRKFC_EVT_EQ_TMP, W_WRKFC_EVT_F, W_MONTH_D
Grain: One row per changed assignment
Description: Expands W_WRKFC_EVT_EQ_TMP to include assignments needing new snapshots.

Table: W_WRKFC_BAL_A_EQ_TMP
Primary Sources: W_WRKFC_EVT_MONTH_EQ_TMP, W_WRKFC_EVT_MONTH_F, W_EMPLOYMENT_STAT_CAT_D
Grain: One row per changed employment status/category and snapshot month
Description: Reference table for which employment status/category combinations have changed, and the snapshot month.

Table: W_WRKFC_EVT_A_EQ_TMP
Primary Sources: W_WRKFC_EVT_EQ_TMP, W_WRKFC_EVT_MERGE_F, W_EMPLOYMENT_STAT_CAT_D, W_WRKFC_EVENT_GROUP_D
Grain: One row per changed event group/sub group and employment status/category
Description: Reference table for which event group/sub groups have changed, and the changed employment status/category.

Table: W_EMPLOYMENT_STAT_CAT_D
Primary Sources: W_EMPLOYMENT_D
Grain: One row per Employment Status and Category
Description: An aggregated dimension table on the distinct Employment Status and Category values available in the W_EMPLOYMENT_D table.

Table: W_WRKFC_EVENT_TYPE_D
Primary Sources: W_WRKFC_EVENT_TYPE_DS
Grain: One row per Workforce Event Type
Description: Stores information about a workforce event, such as the action, whether the organization or job has changed, whether it is a promotion or a transfer, and so on. This table is designed to be a Type-1 dimension.

Table: W_WRKFC_EVENT_GROUP_D
Primary Sources: W_WRKFC_EVENT_TYPE_D
Grain: One row per Event Group and Event Sub Group
Description: An aggregate dimension based on the Event Group and Event Sub Group in the W_WRKFC_EVENT_TYPE_D dimension table.

Table: W_WRKFC_BAL_A
Primary Sources: W_EMPLOYMENT_STAT_CAT_D, W_WRKFC_EVT_MONTH_F
Grain: One row per employment status/category and snapshot month
Description: A Balance Aggregate table based on the snapshot fact table W_WRKFC_EVT_MONTH_F.

Table: W_WRKFC_EVT_A
Primary Sources: W_EMPLOYMENT_STAT_CAT_D, W_WRKFC_EVENT_GROUP_D, W_WRKFC_EVT_MERGE_F
Grain: One row per event group/event sub group and employment status/category
Description: An Events Aggregate table based on the base event fact table W_WRKFC_EVT_MERGE_F.

Key Setup/Configuration Steps

The following table documents the minimum setup required for the target snapshot fact to be loaded successfully. For other functionality to work, it is necessary to perform other setup as documented in the installation guide. If this is not done, it may be necessary to re-run the initial load after completing the additional setup.

DAC System Parameters:
- INITIAL_EXTRACT_DATE: Earliest date to extract data across all facts.
- HR_WRKFC_EXTRACT_DATE: Earliest date to extract data from the HR facts.
- HR_WRKFC_SNAPSHOT_DT: Earliest date to generate snapshots for HR Workforce. This should be set to the 1st of a month.
- HR_WRKFC_SNAPSHOT_TO_WID: Current date in WID form; should not be changed.

Domains:
- Age Band: Age bands need to be defined in a
continuous set of ranges Period of Work BandsPeriod of work bands
need to be defined in a continuous set of ranges Notes 1)Workforce
extract date should be the earliest date from which HR data is
required for reporting (including all HR facts e.g. Absences,
Payroll, Recruitment). This can be later than initial extract date
if other non-HR content loads need an earlier initial extract date.
2)Snapshots should be generated for recent years only in order to
improve ETL performance and reduce the size of the snapshot fact.
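Since both band domains must form a continuous set of ranges, a quick pre-load sanity check can catch misconfigured domain values. The sketch below is illustrative only; the tuple layout (name, start age, exclusive end age, with None for an open-ended last band) is an assumption, not the actual domain value file format.

```python
# Hypothetical band configuration: (band_name, start, end_exclusive);
# end_exclusive is None for an open-ended band such as "55+".
AGE_BANDS = [
    ("< 25", 0, 25),
    ("25-34", 25, 35),
    ("35-44", 35, 45),
    ("45-54", 45, 55),
    ("55+", 55, None),
]

def bands_are_continuous(bands):
    """Return True if each band starts exactly where the previous one
    ends, with no gaps or overlaps (only the last band may be open)."""
    for (_, _, prev_end), (_, cur_start, _) in zip(bands, bands[1:]):
        if prev_end is None or prev_end != cur_start:
            return False
    return True
```

Running such a check before configuring the Age Band and Period of Work Band domains avoids a costly reload later, since any change to the band configuration requires a full initial load.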
8.1.2. Workforce Fact Staging (W_WRKFC_EVT_FS)

When loading the workforce fact staging table from flat file, the following information about the rows and columns to populate should be taken into account.

Workforce Fact Incremental Load

No unnecessary changes should be staged. If there have been no change events for an assignment (instance of a worker in a job), then the fact staging table should not contain any rows for that assignment. The staging table is used as a reference for what data has actually changed and needs to be refreshed in the downstream facts. Adding unnecessary changes will slow down the performance of the incremental refresh.

Staging back-dated changes

A back-dated change is a change to historical (i.e. not current) records. This includes correcting the date of birth or period of work start dates, changes which require the respective band change events to be recalculated. When processing back-dated changes it is quite likely that one change will affect many later records. For example, a correction to the last appraisal rating (or salary) will affect every event since the last appraisal (or salary review). Changes to dimensions may affect the next record's change indicators / previous dimension key values, and may also need to be carried over to other types of events sourced from elsewhere.

To correctly process back-dated changes, the workforce fact staging table must be populated with the changed records and all subsequent events for the changed assignment. This can be done in more than one way, for example:
- Always extract changes and subsequent events from the source transaction system
- Keep a record in the staging area of each change event type that has previously been processed in the warehouse (this method is implemented in the Oracle and PeopleSoft adaptors)
- Extract only the changes into the workforce fact staging table, and then use the data already loaded in the workforce fact to fill in the subsequent change events

Workforce Fact Essential Columns

There are certain columns that are essential to the ETL working and must be loaded into the fact staging table (W_WRKFC_EVT_FS) as a minimum. However, in order to get a useful system after running the ETL it is necessary to populate some measures and dimension keys as well. All of these details are provided in the associated Excel spreadsheet (HR_Analytics_UA_Lineage.xlsx).
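The third option above (filling in subsequent events from data already loaded in the warehouse fact) can be sketched as follows. The three-column record layout is an illustrative assumption; the real staging table has many more columns.

```python
from datetime import date

def build_staging_rows(changed, fact_rows):
    """changed: list of (assignment_id, event_date) for back-dated changes.
    fact_rows: list of (assignment_id, event_date, payload) already loaded.
    Returns the fact rows to (re)stage: for each changed assignment, the
    changed record and every subsequent event, so that the incremental
    load can rebuild the date-track from the earliest change onwards."""
    earliest = {}
    for asg, dt in changed:
        earliest[asg] = min(dt, earliest.get(asg, dt))
    return [
        row for row in fact_rows
        if row[0] in earliest and row[1] >= earliest[row[0]]
    ]
```

The key point the sketch illustrates is that staging is driven per assignment from the earliest changed event date, never from the change alone.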
8.1.3. Workforce Base Fact (W_WRKFC_EVT_F)

[ETL flow diagram: W_WRKFC_EVT_FS → W_WRKFC_EVT_F, with W_WRKFC_EVT_EQ_TMP (incremental only). Annotations: calculates EFFECTIVE_END_DATE based on next event date; deletes any obsolete fact records (incremental only); maintains effective start/end dates (incremental only).]

The workforce base fact is refreshed from the workforce fact staging table:
- Effective end date is calculated based on the next event date
- Deleted events are removed (subsequent events previously loaded but no longer staged; see the Incremental Load section on back-dated changes)

Initial Load Sessions
- SIL_WorkforceEventFact (loads new/updated records)

Incremental Load Sessions
- SIL_WorkforceEventFact (loads new/updated records)
- PLP_WorkforceEventQueue_Asg (loads the change queue table with changed (staged) assignments and their earliest change (staged event) date)
- PLP_WorkforceEventFact_Mntn (deletes obsolete events; see note above)

The date-track (having a continuous, non-overlapping set of effective start/end dates per assignment) is critical to downstream facts and the correct operation of the reports. Deletes can also be handled separately (see the deletes section below), but care needs to be taken to ensure the date-track is correctly maintained if deleting individual records.
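The effective end date calculation described above can be sketched as follows. The far-future end date 01-Jan-3714 matches the open end date shown in the worked example later in this section; the dict-based row layout is an illustrative assumption.

```python
from datetime import date, timedelta

FAR_FUTURE = date(3714, 1, 1)  # "open" end date, as in the worked example

def set_effective_end_dates(rows):
    """rows: list of dicts with ASSIGNMENT_ID and EFFECTIVE_START_DATE.
    Sets each row's EFFECTIVE_END_DATE to the day before the next event's
    start date for the same assignment, keeping the date-track continuous
    and non-overlapping; the latest row per assignment stays open-ended."""
    rows.sort(key=lambda r: (r["ASSIGNMENT_ID"], r["EFFECTIVE_START_DATE"]))
    for cur, nxt in zip(rows, rows[1:]):
        if cur["ASSIGNMENT_ID"] == nxt["ASSIGNMENT_ID"]:
            cur["EFFECTIVE_END_DATE"] = nxt["EFFECTIVE_START_DATE"] - timedelta(days=1)
        else:
            cur["EFFECTIVE_END_DATE"] = FAR_FUTURE
    if rows:
        rows[-1]["EFFECTIVE_END_DATE"] = FAR_FUTURE
    return rows
```

Because every end date is derived from the next start date, re-running this over a changed assignment's full event set is what keeps the date-track gap-free after back-dated changes or deletes.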
8.1.4. Workforce Age Fact (W_WRKFC_EVT_AGE_F)

[ETL flow diagram: W_WRKFC_EVT_F + W_AGE_BAND_D → W_WRKFC_EVT_AGE_F]

The age fact contains one starting row plus one row each time an assignment moves from one age band to the next. For example, if the last age band is 55+ years then there will be an event generated for each assignment on the 55th birthday of the worker (BIRTH_DT + 55 years). Any worker hired beyond the age of 55 will have no additional band change events, just the starting row.

Note the age bands are completely configurable, but because of the dependencies between the age bands and the facts, any changes to the configuration will require a reload (initial load).

This fact is refreshed for an assignment whenever there is a change to the worker's date of birth on the hire record (or the first record if the hire occurred before the fact initial extract date).

Initial Load Sessions
- PLP_WorkforceEventFact_Age_Full (loads new records)

Incremental Load Sessions
- PLP_WorkforceEventFact_Age_Mntn (deletes records to be refreshed or obsolete)
- PLP_WorkforceEventFact_Age (loads changed records)
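The band-crossing rule described above can be sketched as follows, assuming band boundaries are supplied as starting ages. This is an illustration of the rule, not the actual PLP mapping logic; the same shape applies to period-of-work bands with POW_START_DT in place of BIRTH_DT.

```python
from datetime import date

def age_band_events(birth_dt, hire_dt, band_start_ages):
    """Return the event dates for an assignment: one starting row at hire
    plus one event on each birthday where the worker enters the next band
    (e.g. BIRTH_DT + 55 years for a 55+ band). Boundaries crossed before
    hire are skipped, so a worker hired beyond the last boundary gets
    only the starting row."""
    def add_years(d, n):
        try:
            return d.replace(year=d.year + n)
        except ValueError:  # 29-Feb birthday in a non-leap year
            return d.replace(year=d.year + n, month=3, day=1)
    events = [hire_dt]
    for age in band_start_ages:
        crossing = add_years(birth_dt, age)
        if crossing > hire_dt:
            events.append(crossing)
    return events
```

This also makes clear why a change to BIRTH_DT forces a refresh of the whole fact for that assignment: every generated crossing date depends on it.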
8.1.5. Workforce Period of Work Fact (W_WRKFC_EVT_POW_F)

[ETL flow diagram: W_WRKFC_EVT_F + W_PRD_OF_WRK_BAND_D → W_WRKFC_EVT_POW_F]

The period of work fact contains one starting row plus one row each time an assignment moves from one service band to the next. For example, if the first service band is 0-1 years then there will be an event generated for each assignment exactly one year after hire (POW_START_DT).

Note the period of work bands are completely configurable, but because of the dependencies between the service bands and the facts, any changes to the configuration will require a reload (initial load).

This fact is refreshed whenever there is a change to the hire record (or the first record if the hire was before the fact initial extract date).

Initial Load Sessions
- PLP_WorkforceEventFact_Pow_Full (loads new records)

Incremental Load Sessions
- PLP_WorkforceEventFact_Pow_Mntn (deletes records to be refreshed)
- PLP_WorkforceEventFact_Pow (loads changed records)
8.1.6. Workforce Merge Fact (W_WRKFC_EVT_MERGE_F)

[ETL flow diagram: W_WRKFC_EVT_F + W_WRKFC_EVT_AGE_F + W_WRKFC_EVT_POW_F → W_WRKFC_EVT_MERGE_F]

This fact contains the change events from the base, age and service facts. It is refreshed based on the combination of assignments and (earliest) event dates in the fact staging table.

Initial Load Sessions
- PLP_WorkforceEventFact_Merge_Full (loads new records)

Incremental Load Sessions
- PLP_WorkforceEventFact_Merge_Mntn (deletes records to be refreshed)
- PLP_WorkforceEventFact_Merge (loads changed records)
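Conceptually, the merge interleaves the three event streams in assignment/date order. A minimal sketch, assuming each event is an (assignment_id, event_date, event_type) tuple and each input stream is already sorted:

```python
import heapq
from datetime import date

def merge_event_streams(*streams):
    """Merge already-sorted per-fact event streams (base, age, service)
    into a single stream ordered by (assignment_id, event_date), the way
    W_WRKFC_EVT_MERGE_F combines events from its three source facts."""
    return list(heapq.merge(*streams, key=lambda e: (e[0], e[1])))
```

`heapq.merge` is used rather than a full sort because each source fact already holds its events in assignment/date order, so the merge is a single ordered pass.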
8.1.7. Workforce Month Snapshot Fact (W_WRKFC_EVT_MONTH_F)

[ETL flow diagram: W_WRKFC_EVT_MERGE_F + W_MONTH_D → W_WRKFC_EVT_MONTH_F (workforce events plus monthly snapshots)]

This fact contains the merged change events plus a generated snapshot record on the first of every month on or after the HR_WRKFC_SNAPSHOT_DT parameter. To allow future-dated reporting, snapshots are created up to 6 months in advance.

This fact is refreshed based on:
- The combination of assignments and (earliest) event dates in the fact staging table
- Any snapshots required for active assignments since the last load (e.g. if the incremental load is not run for a while, or the system date moves into a new month since the last load)

Initial Load Sessions
- PLP_WorkforceEventFact_Month_Full (loads new records)

Incremental Load Sessions
- PLP_WorkforceEventQueue_AsgMonth (adds to the change queue table any assignments needing new snapshots since the last load)
- PLP_WorkforceEventFact_Month_Mntn (deletes records to be refreshed)
- PLP_WorkforceEventFact_Month (loads changed records)
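The set of snapshot dates implied by the parameters above can be sketched as follows (assuming HR_WRKFC_SNAPSHOT_DT is the 1st of a month, as the setup table requires):

```python
from datetime import date

def snapshot_months(snapshot_start, current, months_ahead=6):
    """Generate the first-of-month snapshot dates from the snapshot start
    date (HR_WRKFC_SNAPSHOT_DT) up to months_ahead months beyond the
    current date, mirroring the rule that snapshots are created up to
    6 months in advance."""
    start_index = snapshot_start.year * 12 + (snapshot_start.month - 1)
    end_index = current.year * 12 + (current.month - 1) + months_ahead
    return [date(i // 12, i % 12 + 1, 1) for i in range(start_index, end_index + 1)]
```

This is also why a long gap between incremental runs still produces a complete snapshot series: the range is derived from the parameter and the current date, not from the previous run.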
8.1.8. Workforce Aggregate Fact (W_WRKFC_BAL_A)

[ETL flow diagram: dimension W_EMPLOYMENT_D → aggregate dimension W_EMPLOYMENT_STAT_CAT_D (PLP Dimension Aggregate Load and PLP Parent Level Update, FULL and INCR); Workforce Month Snapshot Fact W_WRKFC_EVT_MONTH_F + W_EMPLOYMENT_STAT_CAT_D → Workforce Aggregate Fact W_WRKFC_BAL_A (PLP load process, FULL; INCR driven by change queue tables W_WRKFC_BAL_A_EQ_TMP and W_WRKFC_EVT_MONTH_EQ_TMP). The aggregate fact is loaded directly from the Employment Dimension but still remains at the grain of the Employment Stat Cat aggregate dimension.]

The aggregate dimension W_EMPLOYMENT_STAT_CAT_D is based on the distinct Employment Status and Category available in the W_EMPLOYMENT_D table. The aggregate fact table W_WRKFC_BAL_A is based on the snapshot fact table W_WRKFC_EVT_MONTH_F and the aggregate dimension W_EMPLOYMENT_STAT_CAT_D, so as to improve query performance over the fact table W_WRKFC_EVT_MONTH_F. W_WRKFC_BAL_A is loaded directly from dimension W_EMPLOYMENT_D (but essentially remains at the grain of the aggregate dimension W_EMPLOYMENT_STAT_CAT_D) and the Workforce Month Snapshot Fact W_WRKFC_EVT_MONTH_F.

Initial Load Sessions
- PLP_EmploymentDimensionAggregate_Load_Full (loads aggregate dimension W_EMPLOYMENT_STAT_CAT_D based on the distinct Employment Status and Category available in the W_EMPLOYMENT_D table)
- PLP_EmploymentDimension_ParentLevelUpdate_Full (aggregate dimension W_EMPLOYMENT_STAT_CAT_D updates the parent dimension W_EMPLOYMENT_D table)
- PLP_WorkforceBalanceAggregateFact_Load_Full (loads new records into the Balance Aggregate Fact table based on W_WRKFC_EVT_MONTH_F and the aggregate dimension W_EMPLOYMENT_STAT_CAT_D; although it is loaded directly from W_EMPLOYMENT_D, the Balance Aggregate Fact remains at the grain of the aggregate dimension W_EMPLOYMENT_STAT_CAT_D)

Incremental Load Sessions
- PLP_EmploymentDimensionAggregate_Load (loads new rows into aggregate dimension W_EMPLOYMENT_STAT_CAT_D from the current ETL run, based on the distinct Employment Status and Category available in the W_EMPLOYMENT_D table)
- PLP_EmploymentDimension_ParentLevelUpdate (aggregate dimension W_EMPLOYMENT_STAT_CAT_D updates the parent dimension W_EMPLOYMENT_D table)
- PLP_WorkforceBalanceAggregateFact_Load (deletes records that came in via the event queue table W_WRKFC_BAL_A_EQ_TMP and loads new records into the Balance Aggregate Fact table)
- PLP_WorkforceBalanceQueueAggregate_PostLoad (loads event queue table W_WRKFC_BAL_A_EQ_TMP with records based on W_WRKFC_EVT_MONTH_EQ_TMP and W_WRKFC_EVT_MONTH_F)
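The rollup performed by the balance aggregate load can be sketched as a simple group-by from the snapshot grain to the aggregate grain. The three-column row layout and the single headcount measure are illustrative assumptions; the real table carries many more measures.

```python
from collections import defaultdict

def balance_aggregate(snapshot_rows):
    """Roll monthly snapshot rows up to the aggregate grain: one row per
    (employment status/category, snapshot month), summing the measure.
    snapshot_rows: iterable of (stat_cat, month, headcount)."""
    totals = defaultdict(int)
    for stat_cat, month, headcount in snapshot_rows:
        totals[(stat_cat, month)] += headcount
    return dict(totals)
```

During incremental runs, only the (status/category, month) combinations listed in the event queue table are deleted and re-derived this way, which is what keeps the aggregate refresh cheap.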
8.1.9. Workforce Aggregate Event Fact (W_WRKFC_EVT_A)

[ETL flow diagram: dimension W_EMPLOYMENT_D → aggregate dimension W_EMPLOYMENT_STAT_CAT_D, and dimension W_WRKFC_EVENT_TYPE_D → aggregate dimension W_WRKFC_EVENT_GROUP_D (PLP Dimension Aggregate Loads and Parent Level Updates, FULL and INCR); Workforce Event Merge Fact W_WRKFC_EVT_MERGE_F + both aggregate dimensions → Workforce Aggregate Fact W_WRKFC_EVT_A (PLP load process, FULL; INCR driven by change queue tables W_WRKFC_EVT_A_EQ_TMP and W_WRKFC_EVT_EQ_TMP). The aggregate fact is loaded directly from the Employment and Workforce Event Type dimensions but remains at the grain of the Employment Stat Cat and Workforce Event Group aggregate dimensions.]

The aggregate dimension W_EMPLOYMENT_STAT_CAT_D is based on the distinct Employment Status and Category available in the W_EMPLOYMENT_D table. The aggregate dimension W_WRKFC_EVENT_GROUP_D is based on the Event Group and Event Sub Group in the W_WRKFC_EVENT_TYPE_D dimension table. W_WRKFC_EVT_A is an aggregate fact table based on the merged event fact table W_WRKFC_EVT_MERGE_F, aggregate dimension W_WRKFC_EVENT_GROUP_D and aggregate dimension W_EMPLOYMENT_STAT_CAT_D, to improve query performance over W_WRKFC_EVT_MERGE_F. W_WRKFC_EVT_A is loaded directly from dimension W_EMPLOYMENT_D (essentially remaining at the grain of aggregate dimension W_EMPLOYMENT_STAT_CAT_D), dimension W_WRKFC_EVENT_TYPE_D (essentially remaining at the grain of aggregate dimension W_WRKFC_EVENT_GROUP_D) and the workforce fact W_WRKFC_EVT_MERGE_F.

Initial Load Sessions
- PLP_EmploymentDimensionAggregate_Load_Full (loads aggregate dimension W_EMPLOYMENT_STAT_CAT_D based on the distinct Employment Status and Category available in the W_EMPLOYMENT_D table)
- PLP_EmploymentDimension_ParentLevelUpdate_Full (aggregate dimension W_EMPLOYMENT_STAT_CAT_D updates the parent dimension W_EMPLOYMENT_D table)
- PLP_WorkforceEventGroupDimensionAggregate_Load_Full (loads aggregate dimension W_WRKFC_EVENT_GROUP_D based on the Event Group and Event Sub Group in the W_WRKFC_EVENT_TYPE_D dimension table)
- PLP_WorkforceEventGroupDimension_ParentLevelUpdate (aggregate dimension W_WRKFC_EVENT_GROUP_D updates EVENT_GROUP_WID of the parent level dimension W_WRKFC_EVENT_TYPE_D)
- PLP_WorkforceEventAggregateFact_Full (loads new records into the Event Aggregate Fact table W_WRKFC_EVT_A based on the workforce fact table W_WRKFC_EVT_MERGE_F, aggregate dimension W_EMPLOYMENT_STAT_CAT_D and aggregate dimension W_WRKFC_EVENT_GROUP_D; although it is loaded directly from W_EMPLOYMENT_D and W_WRKFC_EVENT_TYPE_D, the aggregate fact remains at the grain of the aggregate dimensions W_EMPLOYMENT_STAT_CAT_D and W_WRKFC_EVENT_GROUP_D)

Incremental Load Sessions
- PLP_EmploymentDimensionAggregate_Load (loads new rows into aggregate dimension W_EMPLOYMENT_STAT_CAT_D from the current ETL run, based on the distinct Employment Status and Category available in the W_EMPLOYMENT_D table)
- PLP_EmploymentDimension_ParentLevelUpdate (aggregate dimension W_EMPLOYMENT_STAT_CAT_D updates the parent dimension W_EMPLOYMENT_D table)
- PLP_WorkforceEventGroupDimensionAggregate_Load (loads new rows into aggregate dimension W_WRKFC_EVENT_GROUP_D from the current ETL run, based on the Event Group and Event Sub Group in the W_WRKFC_EVENT_TYPE_D dimension table)
- PLP_WorkforceEventGroupDimension_ParentLevelUpdate (aggregate dimension W_WRKFC_EVENT_GROUP_D updates the parent level dimension W_WRKFC_EVENT_TYPE_D)
- PLP_WorkforceEventAggregateFact (deletes records that came in via the event queue table W_WRKFC_EVT_A_EQ_TMP and loads new records into the Workforce Event Aggregate Fact table)
- PLP_WorkforceEventQueueAggregate_PostLoad (loads event queue table W_WRKFC_EVT_A_EQ_TMP with records based on W_WRKFC_EVT_EQ_TMP and W_WRKFC_EVT_F)
8.1.10. Handling Deletes

[Diagram: source OLTP → W_WRKFC_EVT_F_PE (integration keys or assignments) → compared with W_WRKFC_EVT_F → W_WRKFC_EVT_DEL_F (records to be deleted: any fact record whose integration key is not in the primary extract table, or whose assignment key is not in the primary extract table) → set delete flag.]

All the standard OBIA mappings are provided for processing deletes (Primary Extract, Identify Deletes, and Soft Delete). However, because of the added complexity of maintaining the date-track (a continuous set of effective start/end dates per assignment), the functionality differs slightly. There are two types of delete to distinguish between:
- Date-tracked delete: a single record is deleted for an assignment, but others remain
- Purge: all records for an assignment are deleted; the assignment no longer exists on the source transaction system

These are discussed in more detail below.
8.1.11. Propagating to derived facts

The incremental load for derived facts will automatically detect any records deleted via the delete process (W_WRKFC_EVT_F_DEL). Deleted records will be physically removed from the derived fact tables as part of the incremental refresh.

8.1.12. Date-tracked Deletes

To delete individual records using the standard delete mappings, the primary keys of the fact should be extracted into the primary extract table. The identify-delete mapping will then compare the primary extract table with the fact table, and the soft-delete mapping will flag as deleted any record in the fact which is not in the primary extract table.

Universal Adaptor: the recommendation is to use the same methods as for Oracle EBS or PeopleSoft. Always stage the changed fact records and any subsequent records, and allow the incremental fact load to take care of the deletes. Alternatively the behavior can be altered, but the burden of maintaining the date-track correctly would then fall to the customer. To alter the behavior so that deletes are done only by the standard delete process, the mapping PLP_WorkforceEventFact_Mntn should be disabled. However, see the worked example below for the kind of date-track maintenance that is required to keep the fact error-free.

8.1.13. Purges

To purge all records for an assignment using the standard delete mappings, the distinct assignment ids should be extracted into the primary extract table. The identify-delete mapping will then compare the primary extract table with the fact table, and the soft-delete mapping will flag as deleted all records for assignments in the fact which are not in the primary extract table.
8.1.14. Primary Extract

[Diagram: source OLTP → W_WRKFC_EVT_F_PE, keyed by DATASOURCE_NUM_ID and either INTEGRATION_ID or INTEGRATION_ID (ASSIGNMENT_ID).]

Extract from the source OLTP either the valid assignments or the valid integration keys for the fact. The delete process will delete fact records with no valid assignment (purge) and no valid integration key (individual record delete). This step can be skipped if there is an alternative method (e.g. a source trigger) of detecting the purges or deletes and pushing the fact keys to delete directly to the W_WRKFC_EVT_F_DEL table.

The recommendation is to use the purge option only: extract the distinct valid assignment ids. If the other option is used then care should be taken to leave the fact consistent. See the worked example below.

8.1.15. Identify Delete

[Diagram: W_WRKFC_EVT_F compared with W_WRKFC_EVT_F_PE → W_WRKFC_EVT_DEL_F (any fact record where the fact integration key is not in the primary extract table, or the fact assignment key is not in the primary extract table: records to be deleted).]

Compares the primary extract table with the fact table to detect purges or deletes. The primary keys of fact records to be deleted are inserted into the delete table. This step can be skipped if there is an alternative method (e.g. a source trigger) of detecting the purges or deletes and pushing the fact keys to delete directly to the W_WRKFC_EVT_F_DEL table.

Incremental Load Sessions
- SIL_WorkforceEventFact_IdentifyDelete

8.1.16. Soft Delete

[Diagram: W_WRKFC_EVT_DEL_F → W_WRKFC_EVT_F (set delete flag).]

This updates the delete flag to Y (Yes) for fact records in the delete table.

Incremental Load Sessions
- SIL_WorkforceEventFact_SoftDelete
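The identify-delete comparison can be sketched as a set difference over the two keys described above. The two-part key layout is an illustrative assumption (the real tables also carry DATASOURCE_NUM_ID).

```python
def identify_deletes(fact_keys, pe_integration_ids, pe_assignment_ids):
    """Return fact primary keys to be flagged deleted: any fact record
    whose integration key is not in the primary extract (individual
    record delete) or whose assignment key is not in the primary
    extract (purge). fact_keys: iterable of (integration_id, asg_id)."""
    return [
        (intg, asg) for intg, asg in fact_keys
        if intg not in pe_integration_ids or asg not in pe_assignment_ids
    ]
```

When only the recommended purge-style extract is used, `pe_integration_ids` would simply contain every staged integration key, so only whole-assignment purges are detected.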
8.1.17. Date-Tracked Deletes - Worked Example

The recommended way of handling date-tracked deletes in the workforce fact is to always stage changed records (in the case of a delete, the previous record) and allow the fact incremental load mappings to handle the changes. The following example shows what can happen if the fact is not maintained correctly when deleting records.

W_WRKFC_EVT_F

Suppose after initial load the following data was loaded in the fact table for assignment 1:

Assignment  Start Date   End Date     Change Type  Organization  Salary
1           01-Jan-2000  31-Dec-2000  HIRE         A             5000
1           01-Jan-2001  31-Dec-2001  REVIEW       A             6000
1           01-Jan-2002  31-Dec-2002  TRANSFER     B             6000
1           01-Jan-2003  01-Jan-3714  REVIEW       B             7000

Now suppose the transfer record was deleted on the source transaction system. If this was handled by the primary extract / identify delete / soft delete mappings, then there would be the following records left in the fact table (delete flag = N):

Assignment  Start Date   End Date     Change Type  Organization  Salary
1           01-Jan-2000  31-Dec-2000  HIRE         A             5000
1           01-Jan-2001  31-Dec-2001  REVIEW       A             6000
1           01-Jan-2003  01-Jan-3714  REVIEW       B             7000

This is wrong on two counts:
1. The date-track is not continuous, so downstream ETL may fail or lose data. Also, reports in Answers may not return data for the gaps in the date-track.
2. The data is not consistent: since the transfer has been deleted, the REVIEW on 01-Jan-2003 should not still be showing organization B.

The first issue would be reasonably simple to fix with an update (either pushing the updated record into the fact staging table, or, if directly updating the fact, it would be necessary to track the event (effective start) date of the updated row in W_WRKFC_EVT_EQ_TMP). However the second issue is more complex. By allowing the fact incremental load to take care of the deletes, these issues are avoided.
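The first failure mode in the example, a broken date-track, is easy to detect mechanically. A sketch, using the same row layout as the example tables:

```python
from datetime import date, timedelta

def date_track_gaps(rows):
    """rows: (assignment, start_date, end_date) tuples, sorted by
    assignment and start date. Returns the (end, next_start) pairs where
    consecutive records for the same assignment are not contiguous,
    i.e. where the next start is not end + 1 day."""
    gaps = []
    for (a1, _, end1), (a2, start2, _) in zip(rows, rows[1:]):
        if a1 == a2 and start2 != end1 + timedelta(days=1):
            gaps.append((end1, start2))
    return gaps
```

Running a check like this after any custom delete handling surfaces the gaps before downstream ETL or reports hit them; the second (consistency) issue cannot be detected this cheaply, which is why the incremental-load route is recommended.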
8.2. Recruitment Fact Process

8.2.1. ETL Flow

[ETL flow diagram: CSV file inputs → staging tables W_JOB_RQSTN_EVENT_FS and W_APPL_EVENT_FS → event facts W_JOB_RQSTN_EVENT_F and W_APPL_EVENT_F → accumulated snapshot facts W_JOB_RQSTN_ACC_SNP_F and W_APPL_ACC_SNP_F → Recruitment Pipeline Fact W_RCRTMNT_EVENT_F → aggregates W_RCRTMNT_RQSTN_A, W_RCRTMNT_APPL_A and W_RCRTMNT_HIRE_A.]
Terminology

Assignment is used to refer to an instance of a person in a job. It should not be an update-able key on the source transaction system.

Applicant is used to refer to the person who is applying for the posted vacancy. He/she can be an existing employee, an ex-employee of the organization, or an external candidate.

Hiring Manager is used to refer to the person to whom the incumbent would report once hired.

Key Steps and Table Descriptions

W_JOB_RQSTN_FS
  Primary Sources: Flat file Source adaptors
  Grain: One row per job requisition per job requisition event per event date
  Description: Records job requisition events for all job requisitions from open through to close/fulfillment.

W_APPL_EVENT_FS
  Primary Sources: Flat file Source adaptors
  Grain: One row per application per job requisition event per event date and sequence
  Description: Records application events for all applications from applying, screening and selection through offer extension, hire or termination of the application.

W_JOB_RQSTN_ACC_SNP_F
  Primary Sources: W_JOB_RQSTN_F
  Grain: One row per job requisition
  Description: Records job requisition related event dates, de-normalized.

W_APPL_ACC_SNP_F
  Primary Sources: W_APPL_EVENT_F
  Grain: One row per application
  Description: Records application related event dates, de-normalized.

W_RCRTMNT_EVENT_F
  Primary Sources: W_JOB_RQSTN_F, W_APPL_EVENT_F, W_JOB_RQSTN_ACC_SNP_F, W_APPL_ACC_SNP_F
  Grain: One row per recruitment event type per event date and sequence
  Description: Merges the job requisition events and application events along with de-normalized event dates. Also known as the Recruitment Pipeline fact.

W_RCRTMNT_RQSTN_A
  Primary Sources: W_RCRTMNT_EVENT_F, W_MONTH_D
  Grain: One row per job requisition per recruitment event month
  Description: Aggregates the job requisition related metrics at a monthly grain.

W_RCRTMNT_APPL_A
  Primary Sources: W_RCRTMNT_EVENT_F, W_MONTH_D
  Grain: One row per applicant demographics per recruitment event month
  Description: Aggregates the applicant related metrics at a monthly grain.

W_RCRTMNT_HIRE_A
  Primary Sources: W_RCRTMNT_EVENT_F, W_MONTH_D
  Grain: One row per hired applicant demographics per recruitment event month
  Description: Aggregates the applicant related metrics, with a focus on hired applicants only, at a monthly grain.

Key Setup/Configuration Steps

All the setup and configuration steps that are required for core Workforce also apply to Recruitment (see the same section for Workforce). For Universal adaptors, no extra configuration steps are required as long as the domain values are configured accurately.
8.2.2. Job Requisition Event & Application Event Facts (W_JOB_RQSTN_EVENT_F & W_APPL_EVENT_F)

These two tables are loaded from the corresponding Universal staging tables (W_JOB_RQSTN_EVENT_FS and W_APPL_EVENT_FS) via the flat files. The columns to be populated (mandatory or non-mandatory) and all other related information exist in the associated Excel spreadsheet (HR_Analytics_UA_Lineage.xlsx).

[FULL load flow diagram: W_JOB_RQSTN_EVENT_FS and W_APPL_EVENT_FS → W_JOB_RQSTN_EVENT_F and W_APPL_EVENT_F → W_JOB_RQSTN_ACC_SNP_F and W_APPL_ACC_SNP_F, with Job Req. Age Band Events (FULL) and Applicant Generated Events (FULL).]

[INCR load flow diagram: as above but incremental, with Job Req. Age Band Events (INCR), Applicant Generated Events (INCR) and Applicant POW Events (INCR).]
Initial Load Sessions
- SIL_JobRequisitionEventFact_Full (loads new records)
- SIL_ApplicantEventFact_Full (loads new records)
- PLP_JobRequisition_AgeBandEvents_Full (deletes and creates requisition age band change event records for those job requisitions that are due to enter a new requisition age band based on their current age since opening, and applies this new event to the job requisition accumulated snapshot fact)
- PLP_ApplicantEventFact_GeneratedEvents_Full (deletes and generates pseudo applicant events that were not supplied by the source system but are necessary for complete analysis of the recruitment pipeline process)

Incremental Load Sessions
- SIL_JobRequisitionEventFact (updates changed records and loads new records)
- PLP_JobRequisition_AgeBandEvents (deletes and creates requisition age band change event records for those job requisitions that are due to enter a new requisition age band based on their current age since opening, and applies this new event to the job requisition accumulated snapshot fact)
- SIL_ApplicantEventFact (updates changed records and loads new records)
- PLP_ApplicantEventFact_GeneratedEvents_Full (deletes and generates pseudo applicant events that were not supplied by the source system but are necessary for complete analysis of the recruitment pipeline process)
- PLP_ApplicantEventFact_PeriodOfWorkEvents (generates pseudo period-of-work-band-crossing events in case a hired applicant crosses his/her first period of work band; this is something the source does not supply, and it is done for all applicants on the assumption that all of them will be hired and stay for the first period-of-work-band timeframe)

8.2.3. Job Requisition Accumulated Snapshot Fact (W_JOB_RQSTN_ACC_SNP_F)

This table stores the de-normalized dates against various job requisition events from the Job Requisition Events base fact table. After the pseudo Age Band Change events are populated in the base Job Requisition fact table, those dates are also reflected in the accumulated snapshot fact table. Any changes to the Hiring Manager Position Hierarchy are also updated in this accumulated snapshot fact. Note that the updates because of hierarchy changes do not apply during a full ETL run.
[ETL flow diagram: W_JOB_RQSTN_EVENT_F → W_JOB_RQSTN_ACC_SNP_F (Job Req. load, FULL & INCR; Job Req. Age Band Events, FULL & INCR, using W_AGE_BAND_D; Position Hierarchy update process, INCR only, using W_POSITION_DH_PRE_CHG_TMP and W_POSITION_DH_POST_CHG_TMP).]

Initial Load Sessions
- PLP_JobRequisition_AccumulatedSnapshot_Full (loads new records)
- PLP_JobRequisition_AgeBandEvents_Full (after the age-band-change events are recorded in the base Job Requisition fact table, this mapping updates the related date column in the accumulated snapshot fact table)

Incremental Load Sessions
- PLP_JobRequisition_AccumulatedSnapshot (updates changed records and loads new records)
- PLP_JobRequisition_AgeBandEvents (after the age-band-change events are recorded in the base Job Requisition fact table, this mapping updates the related date column in the accumulated snapshot fact table)
- PLP_JobRequisition_AccumulatedSnapshot_PositionHierarchy_Update (changes in the Hiring Manager Position Hierarchy due to regular or back-dated changes are applied to the accumulated snapshot fact table)

8.2.4. Applicant Accumulated Snapshot Fact (W_APPL_ACC_SNP_F)
This table stores the de-normalized dates against various applicant events from the Applicant Events base fact table. After the pseudo Age Band Change, Period of Work Band Change and other missing recruitment pipeline events are populated in the base Applicant Event fact table, those dates are also reflected in the accumulated snapshot fact table.

[ETL flow diagram: W_APPL_EVENT_F → W_APPL_ACC_SNP_F (Applicant load, FULL & INCR; Period of Work Band Events, FULL & INCR, using W_PRD_OF_WRK_BAND_D).]

Initial Load Sessions
- PLP_Applicant_AccumulatedSnapshot_Full (loads new records)

Incremental Load Sessions
- PLP_Applicant_AccumulatedSnapshot (updates changed records and loads new records)

8.2.5. Recruitment Pipeline Event Fact
(W_RCRTMNT_EVENT_F) This is the main Recruitment Pipeline event
fact table that is used for the reporting needs, and also is used
to build aggregate tables at three different grains for reporting
purposes. The main purpose of this table is to merge both sides of
the recruitment events (job requisition events as well as applicant
events) and on top of that provide some value added metrics. At
first, the image of this table is captured prior to loading any
data into an Event Queue table, the purpose of which is to track
all changes that are about to happen in this current run. Since
this pre-image is captured by comparing to the main pipeline fact
table, this pre-imaging process does not apply during full ETL run.
Note that this pre-imaging process occurs from both sides (job
requisition events as well as applicant events) and apart from the
event queue table; these processes also populate Implementing HR
Analytics using Universal Adaptors Oracle Corporation |40 another
temporary table (W_RCRTMNT_EVENT_F_TMP) which comes in handy during
aggregate building. Next, the data from either side are brought
into the pipeline fact. The Event queue table drives the merge
process during incremental runs to get better performance. Once the
data is loaded, a post-image process captures the image of the
loaded pipeline fact table and writes it to the temporary table
W_RCRTMNT_EVENT_F_TMP. This becomes the driver table for the rest
of three aggregate building.
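The pre-image, merge, and post-image sequence described above can be sketched as follows. This is a minimal Python sketch of the pattern only, assuming dict-based stand-ins for the warehouse tables; the helper names and in-memory structures are illustrative, not the actual Informatica mappings.

```python
# Sketch of the event-queue-driven incremental merge. Tables are modeled as
# dicts keyed by an event key; the real process operates on warehouse tables.

def pre_image(pipeline_fact, incoming_events):
    """Capture rows of the pipeline fact that are about to change (the event queue)."""
    return {k: v for k, v in pipeline_fact.items() if k in incoming_events}

def incremental_merge(pipeline_fact, incoming_events, event_queue):
    """Delete the queued (obsolete) rows, then re-insert re-processed and new rows."""
    for key in event_queue:                 # delete old versions, driven by the queue
        pipeline_fact.pop(key, None)
    pipeline_fact.update(incoming_events)   # re-insert changed rows, insert new ones
    return pipeline_fact

def post_image(pipeline_fact, event_queue, incoming_events):
    """Capture the loaded image of every changed key (the W_RCRTMNT_EVENT_F_TMP
    role), which then drives the aggregate refreshes."""
    changed_keys = set(event_queue) | set(incoming_events)
    return {k: pipeline_fact[k] for k in changed_keys}

# Usage: one changed event and one brand-new event arrive in an incremental run.
fact = {("REQ1", "APP1", "ENROLL"): 100}
incoming = {("REQ1", "APP1", "ENROLL"): 150, ("REQ1", "APP2", "ENROLL"): 50}
queue = pre_image(fact, incoming)           # old image of the changing row
fact = incremental_merge(fact, incoming, queue)
tmp = post_image(fact, queue, incoming)     # driver for the aggregate builds
```

Because only keys present in the event queue are deleted, untouched history in the fact is never re-processed, which is the performance point of the queue-driven merge.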
[Diagram: Process flow for INITIAL load of the Recruitment Pipeline fact — full load processes bring Job Requisition events (W_JOB_RQSTN_EVENT_F, W_JOB_RQSTN_ACC_SNP_F) and Applicant events (W_APPL_EVENT_F, W_APPL_ACC_SNP_F) into the Recruitment Pipeline Fact (W_RCRTMNT_EVENT_F).]
[Diagram: Process flow for INCREMENTAL load of the Recruitment Pipeline fact — pre-image processes (Job Req. and Applicant) write the current state of W_RCRTMNT_EVENT_F into the Recruitment Pipeline Event Queue (W_RCRTMNT_EVENT_F_EQ_TMP) and the Recruitment Pipeline Temporary table (W_RCRTMNT_EVENT_F_TMP); incremental load processes then merge Job Requisition and Applicant events into W_RCRTMNT_EVENT_F; finally a post-image process writes the loaded fact back to W_RCRTMNT_EVENT_F_TMP.]
Initial Load Sessions
PLP_RecruitmentEventFact_Applicants_Full (loads new records)
PLP_RecruitmentEventFact_JobRequisitions_Full (loads new records)

Incremental Load Sessions
PLP_RecruitmentEventFact_Applicants_PreImage (takes a pre-image of the pipeline fact before the new applicant events are loaded; in other words, determines what is about to change)
PLP_RecruitmentEventFact_JobRequisitions_PreImage (takes a pre-image of the pipeline fact before the job requisition events are loaded; in other words, determines what is about to change)
PLP_RecruitmentEventFact_Applicants (deletes old records that came in the event queue, re-processes and re-inserts them; new records are inserted)
PLP_RecruitmentEventFact_JobRequisitions (deletes old records that came in the event queue, re-processes and re-inserts them; new records are inserted)
PLP_RecruitmentEventFact_PostImage (takes a post-image of the pipeline fact after all the changes are done to it)
8.2.6. Recruitment Job Requisition Aggregate Fact (W_RCRTMNT_RQSTN_A)

This table stores aggregated measures applicable to Job Requisitions at a monthly level. Its load is driven from the Pipeline fact and from the temporary table W_RCRTMNT_EVENT_F_TMP that was populated while loading the Pipeline fact. During full load, the metrics are aggregated into a temporary table, W_RCRTMNT_RQSTN_A_TMP2, which is subsequently updated to set the effective-to-date column, and finally loaded into the end aggregate table. During incremental load, an additional process, driven by the pre-populated temporary table W_RCRTMNT_EVENT_F_TMP that tracks the changes made to the Pipeline fact in the current ETL run, loads yet another temporary table, W_RCRTMNT_RQSTN_A_TMP1. The subsequent aggregation of metrics into the second temporary table, W_RCRTMNT_RQSTN_A_TMP2, is similar to that of the full load, and so are the remaining processes (updating effective-to dates and loading the end aggregate table). The aggregate table has EFFECTIVE_FROM_DT and EFFECTIVE_TO_DT columns. To cater for balance metrics (non-event metrics that are non-additive), these dates help avoid creating unnecessary monthly snapshots when nothing has changed for a Job Requisition. The effective-from date is the date of the last event that happened in the month, and the effective-to date is the last day of that month minus one day. The overall ETL process for this table is explained in the following diagrams.
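The effective-date rule above can be sketched as follows. This is an illustrative Python sketch of the date arithmetic only, assuming a hypothetical helper name; it is not the actual mapping logic.

```python
from datetime import date, timedelta
import calendar

def effective_dates(event_dates):
    """Derive the EFFECTIVE_FROM_DT / EFFECTIVE_TO_DT pair for one Job
    Requisition's events within one month: effective-from is the date of the
    last event in the month; effective-to is the last day of that month minus
    one day (so an unchanged requisition needs no new monthly snapshot)."""
    last_event = max(event_dates)
    days_in_month = calendar.monthrange(last_event.year, last_event.month)[1]
    month_end = date(last_event.year, last_event.month, days_in_month)
    return last_event, month_end - timedelta(days=1)

# Usage: two events in March 2024; the later one drives the effective-from date.
pair = effective_dates([date(2024, 3, 5), date(2024, 3, 18)])
# → (date(2024, 3, 18), date(2024, 3, 30))
```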
[Diagram: FULL load — the Derive process aggregates W_RCRTMNT_EVENT_F (with the time dimensions W_DAY_D and W_MONTH_D) into W_RCRTMNT_RQSTN_A_TMP2; the common Update process sets effective-to dates; the Load process populates W_RCRTMNT_RQSTN_A.]
[Diagram: INCREMENTAL load — the Extract process loads changed records from W_RCRTMNT_EVENT_F_TMP and W_RCRTMNT_EVENT_F into W_RCRTMNT_RQSTN_A_TMP1; the Derive process aggregates into W_RCRTMNT_RQSTN_A_TMP2; the common Update process sets effective-to dates; the Load process refreshes W_RCRTMNT_RQSTN_A.]
Initial Load Sessions
PLP_RecruitmentRequisitionAggregate_Load_Derive_Full (loads new records into the TMP2 table)
PLP_RecruitmentRequisitionAggregate_Load_Update (updates the effective-to date back into the same TMP2 table)
PLP_RecruitmentRequisitionAggregate_Load_Full (loads new records into the end aggregate table)

Incremental Load Sessions
PLP_RecruitmentRequisitionAggregate_Extract (extracts records into the TMP1 table based on changes that happened in the Pipeline fact in the current ETL run)
PLP_RecruitmentRequisitionAggregate_Load_Derive (loads new and changed records into the TMP2 table)
PLP_RecruitmentRequisitionAggregate_Load_Update (updates the effective-to date back into the same TMP2 table)
PLP_RecruitmentRequisitionAggregate_Load (deletes and re-loads new and changed records into the end aggregate table)

8.2.7. Recruitment Applicant Aggregate Fact (W_RCRTMNT_APPL_A)

This mapping aggregates the applicable recruitment pipeline metrics, grouped by all the dimensions in the Applicant Analysis Aggregate Fact table, at a monthly grain. All applicants that pass through the Recruitment pipeline process get into this aggregate table. During incremental load, the process deletes the records that are about to be impacted by changes in the Pipeline fact table, then re-processes them.
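The monthly rollup described above can be pictured with a short sketch. This is an assumption-laden illustration: the dimension keys are collapsed to a single applicant id and the metric to one number, whereas the real aggregate groups by all the dimensions of the Applicant Analysis Aggregate Fact.

```python
from collections import defaultdict
from datetime import date

def aggregate_monthly(pipeline_rows):
    """Roll pipeline fact rows up to a monthly grain per dimension key, in the
    spirit of W_RCRTMNT_APPL_A (simplified: one dimension, one metric)."""
    totals = defaultdict(float)
    for row in pipeline_rows:
        month = (row["event_date"].year, row["event_date"].month)  # W_MONTH_D lookup
        totals[(row["applicant_id"], month)] += row["metric"]
    return dict(totals)

# Usage: two January events collapse into one monthly row; February stays separate.
rows = [
    {"applicant_id": "A1", "event_date": date(2024, 1, 5), "metric": 1.0},
    {"applicant_id": "A1", "event_date": date(2024, 1, 20), "metric": 2.0},
    {"applicant_id": "A1", "event_date": date(2024, 2, 3), "metric": 4.0},
]
monthly = aggregate_monthly(rows)
```

During incremental runs, only the monthly rows whose source pipeline rows changed would be deleted and recomputed this way, driven by W_RCRTMNT_EVENT_F_TMP.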
[Diagram: FULL load — the Load process aggregates W_RCRTMNT_EVENT_F (with the time dimensions W_DAY_D and W_MONTH_D) into W_RCRTMNT_APPL_A. INCREMENTAL load — the Extract process uses the Recruitment Pipeline Fact Temporary table (W_RCRTMNT_EVENT_F_TMP) to load W_RCRTMNT_APPL_A_TMP, and the Load process refreshes W_RCRTMNT_APPL_A.]
Initial Load Sessions
PLP_RecruitmentApplicantAggregate_Load_Full (loads new records)

Incremental Load Sessions
PLP_RecruitmentApplicantAggregate_Extract (loads new and changed records that were impacted in the Pipeline fact during the current ETL run into a temporary table)
PLP_RecruitmentApplicantAggregate_Load (deletes, re-processes and re-loads the data, both changed and new, into the end aggregate table)

8.2.8. Recruitment Hire Aggregate Fact (W_RCRTMNT_HIRE_A)

This mapping aggregates the applicable recruitment pipeline metrics, grouped by all the dimensions in the Hire Analysis Aggregate Fact table, at a monthly grain. Only applicants that are hired get into this aggregate table. During incremental load, the process deletes the records that are about to be impacted by changes in the Pipeline fact table, then re-processes them.
[Diagram: FULL load — the Load process aggregates W_RCRTMNT_EVENT_F (with the time dimensions W_DAY_D and W_MONTH_D) into W_RCRTMNT_HIRE_A. INCREMENTAL load — the Extract process uses the Recruitment Pipeline Fact Temporary table (W_RCRTMNT_EVENT_F_TMP) to load W_RCRTMNT_HIRE_A_TMP, and the Load process refreshes W_RCRTMNT_HIRE_A.]
Initial Load Sessions
PLP_RecruitmentHireAggregate_Load_Full (loads new records)

Incremental Load Sessions
PLP_RecruitmentHireAggregate_Extract (loads new and changed records that were impacted in the Pipeline fact during the current ETL run into a temporary table)
PLP_RecruitmentHireAggregate_Load (deletes, re-processes and re-loads the data, both changed and new, into the end aggregate table)

8.3. Absence Fact Process

8.3.1. ETL Flow

[Diagram: The Absence Event Dimension (W_ABSENCE_EVENT_D) and Absence Type Reason Dimension (W_ABSENCE_TYPE_RSN_D) are loaded from CSV file inputs via their staging tables (W_ABSENCE_EVENT_DS, W_ABSENCE_TYPE_RSN_DS); together with the Day Dimension (W_DAY_D) and the Workforce Fact (W_WRKFC_EVT_F) they load the Absence Fact (W_ABSENCE_EVENT_F).]
Terminology

Absence Occurrence refers to one instance of absence for an employee, from start to end.
Absence Duration is the time lost away from work due to the absence.
Absence Type is a category of absence that the system tracks, such as illness, vacation, and leave.
Absence Reason: within each Absence Type, you can create a set of absence reasons that further classify absences. For example, if you create an Absence Type called Illness, you might set up reasons such as cold, flu, and stress.

Key Steps and Table Descriptions

Table: W_ABSENCE_EVENT_DS
Primary Sources: Flat file Source adaptors
Grain: One record per absence occurrence for a given employee/absentee and his/her assignment.
Description: Stores the absence occurrences for each employee (Dimension Staging table).

Table: W_ABSENCE_TYPE_RSN_DS
Primary Sources: Flat file Source adaptors
Grain: One record per valid Absence Type and Reason combination. To handle situations where a Reason is not available in the transaction, add another set of records for each valid Absence Type only (no reasons).
Description: Stores absence type, reason and category information (Dimension Staging table).

Table: W_ABSENCE_TYPE_RSN_D
Primary Sources: W_ABSENCE_TYPE_RSN_DS
Grain: One record per valid Absence Type and Reason combination. To handle situations where a Reason is not available in the transaction, add another set of records for each valid Absence Type only (no reasons).
Description: Stores absence type, reason and category information.

Table: W_ABSENCE_EVENT_D
Primary Sources: W_ABSENCE_EVENT_DS, W_ABSENCE_TYPE_RSN_D
Grain: One row per absence occurrence for a given employee and his/her assignment.
Description: Stores the absence occurrences for each employee.

Table: W_ABSENCE_EVENT_F
Primary Sources: W_ABSENCE_EVENT_D, W_DAY_D, W_WRKFC_EVT_F
Grain: One row per absence day per absence occurrence for a given employee and his/her assignment.
Description: Stores one row per absence day per absence occurrence for a given employee and his/her assignment.

Key Setup/Configuration Steps

All the setup and configuration steps that are required for core Workforce also apply for Absence (see the same section for Workforce). For Universal adaptors, no extra configuration steps are required as long as the domain values are configured accurately. Please refer to more detailed configurations of the Absence Event Dimension in the associated Excel spreadsheet (HR_Analytics_UA_Lineage.xlsx).

8.3.2. Absence Event Fact (W_ABSENCE_EVENT_F)

This table is loaded from the two dimension tables W_ABSENCE_EVENT_D and W_ABSENCE_TYPE_RSN_D along with the time dimension. The dimension tables are loaded via their corresponding Universal staging area tables (W_ABSENCE_EVENT_DS and W_ABSENCE_TYPE_RSN_DS). The columns to be populated, whether mandatory or non-mandatory, and all other related information exist in the associated Excel spreadsheet (HR_Analytics_UA_Lineage.xlsx).
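Since the fact grain is one row per absence day per occurrence, the load conceptually explodes each occurrence (one row at the W_ABSENCE_EVENT_D grain) against the Day dimension. The following Python sketch illustrates that grain change under simplified assumptions; the field names are hypothetical, not the actual warehouse columns.

```python
from datetime import date, timedelta

def explode_occurrence(employee_id, start, end):
    """Turn one absence occurrence (dimension grain: employee + occurrence)
    into one row per absence day (fact grain: employee + occurrence + day),
    mimicking the join against the Day dimension."""
    days = (end - start).days + 1          # inclusive of both endpoints
    return [{"employee_id": employee_id,
             "absence_date": start + timedelta(days=i)}
            for i in range(days)]

# Usage: a three-day occurrence yields three fact rows.
fact_rows = explode_occurrence("E1", date(2024, 6, 10), date(2024, 6, 12))
```

The Absence Duration metric then falls out naturally as the count of day-level rows per occurrence.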
[Diagram: FULL load — SIL load processes populate W_ABSENCE_EVENT_D and W_ABSENCE_TYPE_RSN_D from W_ABSENCE_EVENT_DS and W_ABSENCE_TYPE_RSN_DS; a full load process then populates W_ABSENCE_EVENT_F.]
[Diagram: INCREMENTAL load — SIL incremental load processes refresh the dimensions; the Event Queue process populates W_ABSENCE_EVENT_EQ_TMP and W_WRKFC_EVT_EQ_TMP; the Mntn process and the incremental load process maintain W_ABSENCE_EVENT_F; the Position Hierarchy Update process (INCR) applies changes from W_POSITION_DH_PRE_CHG_TMP and W_POSITION_DH_POST_CHG_TMP.]

Initial Load Sessions
SIL_AbsenceEventDimension_Full (loads new/updated records from the Absence Event Staging Dimension and Absence Type Reason Dimension)
SIL_AbsenceTypeReasonDimension_Full (loads new/updated records from the Absence Type Reason Staging Dimension)
PLP_AbsenceEventFact_Full (loads new/updated records from the Absence Event Dimension)

Incremental Load Sessions
SIL_AbsenceEventDimension (loads new/updated records from the Absence Event Staging Dimension and Absence Type Reason Dimension)
SIL_AbsenceTypeReasonDimension (loads new/updated records from the Absence Type Reason Staging Dimension)
PLP_AbsenceEventFact (loads new/updated records from the Absence Event Dimension, Day Dimension and Workforce Fact)
PLP_AbsenceEventQueue_Event (loads the change queue table with changed (staged) assignments and their earliest change (staged event) date for each Absence Event)
PLP_AbsenceEventFact_Mntn (deletes obsolete Absence records from the Absence Fact based on the change queue table W_ABSENCE_EVENT_EQ_TMP)
PLP_AbsenceEventFact_PositionHierarchy_Update (changes in the absent employees' Position Hierarchy due to regular or back-dated changes are applied to the Absence fact table)

8.4. Learning Fact Process

8.4.1. ETL Flow
[Diagram: The Learning Enrollment SNP Fact CSV file input loads the staging table W_LM_ENROLLMENT_ACC_SNP_FS, which loads the Learning Enrollment Snapshot Fact (W_LM_ENROLLMENT_ACC_SNP_F) and then the Learning Enrollment Fact (W_LM_ENROLLMENT_EVENT_F); the Learning Grade Band Dimension (W_LM_GRADE_BAND_D) is loaded from the FILE_LEARNING_GRADE_BAND and FILE_ROW_GEN_BAND file inputs.]
Terminology

Learning Course describes the different courses in a learning management system. It has attributes such as Code, Name, Description and so on.
Learning Activity describes an instance (class) of a course.
Learning Enrollment Status describes the different enrollment statuses in a learning management system, for example Enrolled, Completed, Approved, etc.
Learning Program represents a significant learning goal that can be achieved by completing multiple learning activities.

Key Steps and Table Descriptions

Table: W_LM_ENROLLMENT_ACC_SNP_FS
Primary Sources: Flat file Source adaptors
Grain: A single activity related to a given course on a given enrollment status date.

Table: W_LM_GRADE_BAND_D
Primary Sources: Flat file sources
Grain: One row per Learning Grade Band and one row for each Learning Score.
Description: The Learning Grade Band Dimension stores data for grade/scoring bands for learning activities.

Table: W_LM_ENROLLMENT_ACC_SNP_F
Primary Sources: W_LM_ENROLLMENT_ACC_SNP_FS
Grain: One row per enrollment per learner/employee per learning activity. For example, if an employee requests, enrolls in and completes a learning activity, there will be one row in this table.
Description: This accumulated snapshot fact table captures each learner's enrollment in a learning activity.

Table: W_LM_ENROLLMENT_EVENT_F
Primary Sources: W_LM_ENROLLMENT_ACC_SNP_F, W_LM_GRADE_BAND_D
Grain: Learner/Employee + Learning Activity + Status. For example, if an employee requests, enrolls in and completes a learning activity, there will be 3 rows in this table, one for each of the statuses.
Description: This fact table stores the status changes for the learning enrollment process.

Key Setup/Configuration Steps

All the setup and configuration steps that are required for core Workforce also apply for Learning (see the same section for Workforce). For Universal adaptors, no extra configuration steps are required as long as the domain values are configured accurately.

8.4.2. Learning Enrollment Accumulated Snapshot Fact (W_LM_ENROLLMENT_ACC_SNP_F)

The W_LM_ENROLLMENT_ACC_SNP_F accumulated snapshot fact table captures each learner's enrollment in a learning activity and the status changes. The grain of this table is at the Learner/Employee + Learning Activity level. For example, if an employee requests, enrolls in and completes a learning activity, there will be one row in this table.

[Diagram: A SIL load process (FULL and INCR) loads W_LM_ENROLLMENT_ACC_SNP_F from W_LM_ENROLLMENT_ACC_SNP_FS; a PLP load process (FULL and INCR) updates it; the Position Hierarchy Update process (INCR only) applies changes from W_POSITION_DH_PRE_CHG_TMP and W_POSITION_DH_POST_CHG_TMP.]

Initial Load Sessions
SIL_LearningEnrollmentFact_Full (this mapping is responsible for loading fact records into the table W_LM_ENROLLMENT_ACC_SNP_F based on the corresponding staging area table)
PLP_LearningEnrollmentUpdate_Full (this mapping is used to update the initial waitlisted and enrollment initiated dates in the Learning Enrollment Accumulated Snapshot Fact table, W_LM_ENROLLMENT_ACC_SNP_F)

Incremental Load Sessions
SIL_LearningEnrollmentFact (this mapping takes care of inserting new records as well as updating existing records in the target table W_LM_ENROLLMENT_ACC_SNP_F)
PLP_Learning_Enrollment_Accumulated_Snapshot_PositionHierarchy_Update (changes in the employees' Position Hierarchy due to regular or back-dated changes are applied to the Learning Snapshot Fact table)
PLP_LearningEnrollmentUpdate (this mapping is used to incrementally update the initial waitlisted and enrollment initiated dates in the Learning Enrollment Accumulated Snapshot Fact table, W_LM_ENROLLMENT_ACC_SNP_F)

8.4.3. Learning Enrollment Event Fact (W_LM_ENROLLMENT_EVENT_F)

The W_LM_ENROLLMENT_EVENT_F fact table stores the status changes for the learning enrollment process. Its grain is Learner/Employee + Learning Activity + Status. For example, if an employee requests, enrolls in and completes a learning activity, there will be 3 rows in this table, one for each of the statuses.
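The relationship between the two grains can be sketched as a fan-out: one snapshot row with several status dates becomes one event row per populated status. The field and status names below are illustrative assumptions; the real mapping also resolves grade bands and other dimensions.

```python
def snapshot_to_events(snapshot_row):
    """Fan one accumulated-snapshot row (grain: learner + activity) out into
    event rows (grain: learner + activity + status), one per status date that
    is actually populated on the snapshot."""
    status_dates = {"REQUESTED": snapshot_row.get("requested_dt"),
                    "ENROLLED": snapshot_row.get("enrolled_dt"),
                    "COMPLETED": snapshot_row.get("completed_dt")}
    return [{"learner_id": snapshot_row["learner_id"],
             "activity_id": snapshot_row["activity_id"],
             "status": status, "status_dt": dt}
            for status, dt in status_dates.items() if dt is not None]

# Usage: a learner who has requested and enrolled (but not yet completed)
# yields two event rows from the single snapshot row.
events = snapshot_to_events({"learner_id": "L1", "activity_id": "ACT1",
                             "requested_dt": "2024-01-02",
                             "enrolled_dt": "2024-01-05",
                             "completed_dt": None})
```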
[Diagram: A PLP load process (FULL and INCR) loads W_LM_ENROLLMENT_EVENT_F from W_LM_ENROLLMENT_ACC_SNP_F.]

Initial Load Sessions
PLP_LearningEnrollmentEventFact_Full (this mapping is used to create the Learning Enrollment Event Fact, W_LM_ENROLLMENT_EVENT_F)

Incremental Load Sessions
PLP_LearningEnrollmentEventFact (this mapping is used to insert/update rows in the Learning Enrollment Event Fact, W_LM_ENROLLMENT_EVENT_F)

8.5. Payroll Fact Process

8.5.1. ETL Flow

[Diagram: The Payroll Fact CSV file input loads the staging table W_PAYROLL_FS, which loads the Payroll Fact (W_PAYROLL_F) and then the Payroll Aggregate Fact (W_PAYROLL_A), together with the aggregate dimensions W_EMP_DEMOGRAPHICS_D, W_JOB_CATEGORY_D and W_PAY_TYPE_GROUP_D.]
Terminology

Pay Type describes the various types of compensations or deductions that typically appear on a pay stub. Examples include Earnings, Bonus, Taxes and so on.
Pay Item Detail describes whether the line item in the payroll fact is at a detail level (like 401K deductions, Medicare deductions, Social Security deductions, health insurance deductions, etc.) or at the higher level of a group (like DEDUCTIONS, EARNINGS or TAXES and so on).

Key Steps and Table Descriptions

Table: W_PAYROLL_FS
Primary Sources: Flat file Source adaptors
Grain: One row per employee, pay period and pay type.
Description: This table stores the base payroll transactions.

Table: W_PAYROLL_F
Primary Sources: W_PAYROLL_FS
Grain: Typically Employee - Pay Type - Pay Period Start Date - Pay Period End Date. For a given employee and pay period, each record in this table stores the amount associated with that pay type (line item).
Description: Stores the base payroll transactions. Examples of fact information stored in this table include Pay Check Date, Pay Item Amount, Currency Codes, Exchange Rates and so on.

Table: W_PAYROLL_A_TMP
Primary Sources: W_PAYROLL_F
Grain: Same grain as W_PAYROLL_F.
Description: This temporary table is used to extract the incremental changes that happened on the base fact and to drive the incremental aggregate refresh.

Table: W_PAYROLL_A
Primary Sources: W_PAYROLL_F, W_PAYROLL_A_TMP, W_EMP_DEMOGRAPHICS_D, W_JOB_CATEGORY_D, W_PAY_TYPE_GROUP_D
Grain: Monthly (Period Start and End Dates) out of the box (configurable), at the Employee Demographics, Job Category and Pay Type Group aggregate dimension levels.
Description: Stores payroll transactions aggregated at a monthly (configurable) level on top of the base fact table W_PAYROLL_F.

Key Setup/Configuration Steps

All the setup and configuration steps that are required for core Workforce also apply for Payroll (see the same section for Workforce). The time grain (monthly out of the box) of the payroll aggregate table can be configured to be Weekly, Quarterly or Yearly. Check the configuration steps for the parameter $$GRAIN.

8.5.2. Payroll Fact (W_PAYROLL_F)

The W_PAYROLL_F fact table stores the base payroll transactions. Examples of fact information stored in this table include Pay Check Date, Pay Item Amount, Currency Codes, Exchange Rates and so on. The grain of this table is typically at the Employee - Pay Type - Pay Period Start Date - Pay Period End Date level. For a given employee and pay period, each record in this table stores the amount associated with that pay type (line item).
[Diagram: A SIL load process (FULL and INCR) loads W_PAYROLL_F from the Payroll Fact staging table W_PAYROLL_FS; the Position Hierarchy Update process (INCR only) applies changes from W_POSITION_DH_PRE_CHG_TMP and W_POSITION_DH_POST_CHG_TMP.]
Initial Load Sessions
SIL_PayrollFact_Full (this mapping is responsible for loading fact records into the table W_PAYROLL_F based on the corresponding staging area table)

Incremental Load Sessions
SIL_PayrollFact (this mapping is responsible for loading fact records into the table W_PAYROLL_F based on the corresponding staging area table, since the last refresh date)
PLP_PayrollFact_PositionHierarchy_Update (changes in the employees' Position Hierarchy due to regular or back-dated changes are applied to the Payroll Fact table)

8.5.3. Payroll Aggregate Fact (W_PAYROLL_A)

The W_PAYROLL_A aggregate fact table stores payroll transactions aggregated at a monthly level on top of the base fact table W_PAYROLL_F. The grain of this table is monthly (Period Start and End Dates) out of the box (configurable), at the Employee Demographics, Job Category and Pay Type Group aggregate dimension levels.
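The configurable time grain can be pictured as choosing the period bucket for each pay period end date. The sketch below is a hedged illustration of the $$GRAIN choice; the bucket boundaries are illustrative and do not reproduce the actual W_MONTH_D/W_DAY_D lookup.

```python
from datetime import date

def time_bucket(d, grain="MONTH"):
    """Map a pay period end date to its aggregate time bucket, mimicking the
    $$GRAIN parameter (MONTH out of the box; WEEK/QUARTER/YEAR configurable)."""
    if grain == "MONTH":
        return (d.year, d.month)
    if grain == "QUARTER":
        return (d.year, (d.month - 1) // 3 + 1)
    if grain == "YEAR":
        return (d.year,)
    if grain == "WEEK":
        iso = d.isocalendar()
        return (iso[0], iso[1])        # ISO year and ISO week number
    raise ValueError("unsupported grain: " + grain)

# Usage: the same pay period end date lands in different buckets per grain.
d = date(2024, 5, 15)
buckets = {g: time_bucket(d, g) for g in ("MONTH", "QUARTER", "YEAR", "WEEK")}
```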
[Diagram: PLP processes (FULL and INCR) load W_PAYROLL_A from W_PAYROLL_F together with the aggregate dimensions W_EMP_DEMOGRAPHICS_D, W_JOB_CATEGORY_D and W_PAY_TYPE_GROUP_D; during incremental runs (INCR only), PLP processes drive the refresh through the temporary table W_PAYROLL_A_TMP.]
Initial Load Sessions
PLP_PayrollAggregate_Load_Full (aggregates all base payroll transactions from W_PAYROLL_F at the grain of the designed aggregate dimensions, like Demographics, Job Category and Pay Type Group, at a monthly level)

Incremental Load Sessions
PLP_PayrollAggregate_Extract (loads new or changed payroll base transaction records from W_PAYROLL_F into a temporary table, W_PAYROLL_A_TMP. This mapping resolves the aggregate dimension keys by looking up the aggregate dimensions. Also, based on the time granularity chosen, this mapping looks up the correct time bucket. With these two steps done, the final Payroll Aggregate refresh becomes simpler)
PLP_PayrollAggregate_Load (refreshes the Payroll Aggregate table W_PAYROLL_A, driving from the temporary table loaded in the prior step, W_PAYROLL_A_TMP. The incremental refresh policy relies on the fact that for Payroll, there can practically be no updates. It could well be an adjustment run or a reversal run or likewise. The ITEM_AMT value in the base payroll transaction w