As you may know, it is notoriously hard to build any custom reporting from JD Edwards data. With table names like “F1111,” Julian-style dates, and complex column mapping, there are real challenges in preparing JDE data for analysis. With this in mind, we have found the most effective data architecture is to reorganize the transactional data and create a data hub that is structured for high-performance reporting and predictive modeling.

In part one of this series, we will address the many challenges of designing an analytics system for JD Edwards, including collecting JD Edwards data and integrating it with other data sources. In part two, we will cover how to add company-specific business logic and how to host analytics-ready JD Edwards data.

JD Edwards Data 

Collecting JDE Data

As with any ETL solution, the first step is connecting to data in the JD Edwards backend. Two common challenges at this phase are dealing with the change data capture (CDC) aspect of the data flow and formatting the staged data for integration.

Dealing with the Change Data Capture (CDC) Aspect of the Data Flow

For the CDC challenge, you can use multiple approaches to efficiently extract JD Edwards data. Ideally, the system only pulls data from tables that were updated since the previous run. However, there are a few oddities in JD Edwards that make this difficult:

  • Two-column Julian timestamping (columns aliased as UPMJ and TDAY; see the sketch after this list)
  • Frequent use of batch scripts to move data internally
  • JD Edwards Xe AS/400 not accommodating CDC software
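
To make the first point concrete, here is roughly what a date-based incremental filter looks like when the update stamp is split across a Julian date column and an HHMMSS time column. This is a minimal sketch assuming SQL Server syntax, the Sales Order Detail table (F4211), and a source schema named PRODDTA; the watermark values are illustrative.

```sql
-- Two-column Julian watermark filter (sketch; schema, table, and values are assumptions).
-- F4211 prefixes its columns with "SD": SDUPMJ = Date Updated (CYYDDD Julian format),
-- SDTDAY = Time of Day (HHMMSS integer).
DECLARE @last_julian_date INT = 124074;   -- high-water mark from the prior run: 2024, day 074
DECLARE @last_time_of_day INT = 173500;   -- 17:35:00

SELECT *
FROM   PRODDTA.F4211
WHERE  SDUPMJ > @last_julian_date
   OR (SDUPMJ = @last_julian_date AND SDTDAY > @last_time_of_day);
```

Even a correct filter like this misses rows touched by internal batch jobs that do not maintain the update stamp, which is why the approaches below are usually more reliable.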

One common approach to solving these issues is to select the entire table via an ODBC connection and merge based on a hash key created in the staging layer. Where the environment supports it, however, the better approach is to implement a log-based CDC tool such as Attunity Replicate (now part of Qlik) to push changed rows in near real time.
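
The hash-compare merge in the first approach might look something like the following. This is a minimal sketch assuming a SQL Server staging database, the Address Book master (F0101), and a persisted copy that stores the same hash; only a few columns are hashed and updated here for brevity.

```sql
-- Full-extract-and-merge using a hash key (sketch; table and column choices are illustrative).
-- stg.F0101 holds the complete extract from the current run; dbo.F0101 is the persisted copy
-- and carries a row_hash column computed the same way.
MERGE dbo.F0101 AS tgt
USING (
    SELECT s.*,
           HASHBYTES('SHA2_256', CONCAT_WS('|', s.ABALPH, s.ABAT1, s.ABUPMJ)) AS row_hash
    FROM   stg.F0101 AS s
) AS src
    ON tgt.ABAN8 = src.ABAN8                      -- ABAN8 = Address Book number (primary key)
WHEN MATCHED AND tgt.row_hash <> src.row_hash THEN
    UPDATE SET ABALPH   = src.ABALPH,
               ABAT1    = src.ABAT1,
               ABUPMJ   = src.ABUPMJ,
               row_hash = src.row_hash
WHEN NOT MATCHED BY TARGET THEN
    INSERT (ABAN8, ABALPH, ABAT1, ABUPMJ, row_hash)
    VALUES (src.ABAN8, src.ABALPH, src.ABAT1, src.ABUPMJ, src.row_hash);
```

In production, the hash would cover every non-key column, but the trade-off remains: the source table is read in full on every run.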

Formatting the Staged Data for Integration

To format the staged data for integration, you need to accommodate the many idiosyncrasies of JD Edwards data. Using information pulled from a variety of metadata tables (e.g., F98711, F9860), you can not only move data into the BI system but also (see the sketch after this list):

  • Convert to consistent string data types
  • Translate to ANSI SQL standard datetimes
  • Remove whitespace (except for UDC codes where whitespace is important)
  • Add decimals to numeric columns
  • Index staged tables by their primary keys (as opposed to collecting them as heaps)
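
Several of these steps can be expressed directly in SQL. The following is a minimal sketch against a staged copy of the Sales Order Detail table (F4211), assuming SQL Server syntax; the column choices are illustrative, and the display-decimal count is hard-coded here, whereas in practice it comes from the JDE data dictionary.

```sql
-- Formatting sketch (column names and decimal count are illustrative assumptions).
SELECT
    CAST(sd.SDDOCO AS VARCHAR(10))                            AS order_number,    -- consistent string type
    DATEADD(DAY, CAST(sd.SDUPMJ AS INT) % 1000 - 1,
            DATEFROMPARTS(1900 + CAST(sd.SDUPMJ AS INT) / 1000, 1, 1))
                                                              AS updated_date,    -- CYYDDD Julian -> DATE
    TIMEFROMPARTS(CAST(sd.SDTDAY AS INT) / 10000,
                  (CAST(sd.SDTDAY AS INT) / 100) % 100,
                  CAST(sd.SDTDAY AS INT) % 100, 0, 0)         AS updated_time,    -- HHMMSS integer -> TIME
    LTRIM(RTRIM(sd.SDLITM))                                   AS item_number,     -- strip padding (skip for UDC codes)
    sd.SDAEXP / POWER(10.0, 2)                                AS extended_amount  -- shift decimals (2 assumed)
FROM stg.F4211 AS sd;
-- Note: JDE stores "no date" as zero, so SDUPMJ = 0 needs a CASE guard in production.
```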

Just by modernizing the JD Edwards data with the steps outlined above, you make the information significantly easier to work with for data integration.

Pro Tip: All of the required SQL code can be generated automatically using custom tools developed specifically for JD Edwards. Creating ETL packages programmatically yields significant time savings in development and testing. (Shameless plug: Contact 2nd Watch to learn more about these custom ETL tools.)
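
To illustrate the idea (this is not the 2nd Watch tooling, just a generic sketch assuming SQL Server and a staged copy of F0101 in a stg schema), even the built-in metadata views can drive simple code generation, such as building a select list that trims fixed-width character columns:

```sql
-- Generate a formatted select list from metadata (hypothetical, minimal example).
SELECT STRING_AGG(
           CASE WHEN DATA_TYPE IN ('char', 'nchar')
                THEN CONCAT('RTRIM(', COLUMN_NAME, ') AS ', COLUMN_NAME)
                ELSE COLUMN_NAME
           END, ', ') WITHIN GROUP (ORDER BY ORDINAL_POSITION) AS select_list
FROM   INFORMATION_SCHEMA.COLUMNS
WHERE  TABLE_SCHEMA = 'stg'
  AND  TABLE_NAME   = 'F0101';
```

The same pattern extends to Julian-date conversion and decimal shifting by joining the JDE metadata tables mentioned above.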

Integrating with Other Data Sources

A major value proposition of building a BI platform is the ability to blend disparate data sources. Because business users frequently need a more holistic picture of the data to answer their business questions, reporting on JD Edwards data alone is not sufficient. To that end, you should create a dedicated staging layer to prepare data for integration with other sources.

Example: 2nd Watch previously worked with a client that had multiple JDE instances (both Xe and 9.1), Salesforce, TM1, custom data flows, and a variety of flat files they wanted to incorporate into executive reporting. By centralizing the collection point, the client could generate single reports spanning multiple JD Edwards instances and had a single access point for quality assurance and SQL developers.

If you’re looking for a strategic partner to help you get the most from your JDE data, contact us today for a JDE data assessment.
