How to Add Business Logic Unique to a Company and Host Analyzable JDE Data

In the first part of this series, A Step by Step Guide to Getting the Most from Your JD Edwards Data, we walked through the process of collecting JDE data and integrating it with other data sources. In this post, we will show you how to add business logic unique to a company and host analyzable JDE data.

Adding Business Logic Unique to a Company

When working with JD Edwards, you’ll likely spend the majority of your development time defining business logic and source-to-target mapping required to create an analyzable business layer. In other words, you’ll transform the confusing and cryptic JDE metadata into something usable. So, rather than working with columns like F03012.[AIAN8] or F0101.[ABALPH], the SQL code will transform the columns into business-friendly descriptions of the data. For example, here is a small subset of the customer pull from the unified JDE schema:

[Image: Sample customer columns from the unified JDE schema]
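
To make that concrete, here is a minimal sketch of what the source-to-target mapping can look like in SQL. F0101 is the JDE Address Book and F03012 is the Customer Master; the output column names are illustrative assumptions, not the actual 2nd Watch mapping:

-- Translate cryptic JDE columns into business-friendly names.
SELECT
    ab.ABAN8  AS CustomerNumber,  -- Address Book number
    ab.ABALPH AS CustomerName     -- "alpha name" (free-text name)
FROM F0101 ab
JOIN F03012 cm                    -- Customer Master
    ON cm.AIAN8 = ab.ABAN8;       -- AIAN8 = the customer's address number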
Furthermore, you can add information from other sources. For example, if a business wants to include customer information stored only in Salesforce, you can build that information into the new [Customer] table, which exists as a subject area rather than as a store of data from a specific source. Moreover, the new business layer can act as a “single source of the truth” or “operational data store” for each subject area of the organization’s structured data.

[Image: Operational data store]
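
A rough sketch of the Salesforce blend described above, assuming Salesforce accounts have been staged into a table such as stg_salesforce_account (an illustrative name) and a jde_customer view like the one shown earlier:

-- Blend JDE and Salesforce customers into the [Customer] subject area.
INSERT INTO Customer (CustomerNumber, CustomerName, SourceSystem)
SELECT CustomerNumber, CustomerName, 'JDE'
FROM jde_customer
UNION ALL
SELECT sf.Id, sf.Name, 'Salesforce'
FROM stg_salesforce_account sf
WHERE NOT EXISTS (                  -- only accounts not already in JDE
    SELECT 1
    FROM jde_customer j
    WHERE j.CustomerName = sf.Name  -- naive match; real logic would use a cross-reference table
);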

Looking for Pre-built Modules?

2nd Watch has built out data marts for several subject areas. All tables are easily joined on natural keys, provide easy-to-interpret column names, and are “load-ready” for any visualization tool (e.g., Tableau, Power BI, Looker) or data application (e.g., machine learning, data warehouse, reporting services). Modules already developed include the following:

Account Master, Accounts Receivable, Backlog, Balance Sheet, Booking History, Budget, Business Unit, Cost Center, Currency Rates, Customer, Date, Employee, General Ledger, Inventory, Organization, Product, Purchase Orders, Sales History, Tax, Territory, and Vendor.

Hosting Analyzable JDE Data

After creating the data hub, many companies prefer to warehouse their data in order to improve performance by time boxing tables, pre-aggregating important measures, and indexing based on frequently used queries. The data warehouse also provides dedicated resources to the reporting tool and splits the burden of the ETL and visualization workloads (both memory-intensive operations).
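
As a rough illustration of two of those optimizations, pre-aggregation and indexing (T-SQL-flavored; the table and column names are hypothetical):

-- Pre-aggregate a frequently used measure into a summary table.
SELECT InvoiceMonth, CustomerNumber, SUM(SalesAmount) AS TotalSales
INTO SalesMonthlySummary
FROM FactSales
GROUP BY InvoiceMonth, CustomerNumber;

-- Index the summary table for the most common query pattern.
CREATE INDEX IX_SalesMonthlySummary
    ON SalesMonthlySummary (CustomerNumber, InvoiceMonth);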

By design, because the business layer is load-ready, it’s relatively trivial to extract the dimensions and facts from the data hub and build a star-schema data warehouse. Using the case from above, the framework would simply capture the changed data from the previous run, generate any required keys, and update the corresponding dimension or fact table:

[Image: Simple star schema]
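
A simplified sketch of that incremental load, using an ANSI-style MERGE (the table names are illustrative, and the surrogate key is assumed to be generated by the dimension table itself, e.g., via an IDENTITY column):

-- Apply changed rows captured from the data hub to a dimension.
MERGE INTO DimCustomer AS tgt
USING CustomerChanges AS src            -- rows changed since the previous run
    ON tgt.CustomerNumber = src.CustomerNumber
WHEN MATCHED THEN
    UPDATE SET CustomerName = src.CustomerName
WHEN NOT MATCHED THEN
    INSERT (CustomerNumber, CustomerName)
    VALUES (src.CustomerNumber, src.CustomerName);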

Evolving Approaches to JDE Analytics

This approach to analyzing JD Edwards data allows businesses to vary the BI tools they use to answer their questions (not just tools specialized for JDE) and to change their approach as technology advances. 2nd Watch has implemented the JDE Analytics Framework both on-premises and in public clouds (Azure and AWS), and connected it with a variety of analysis tools, including Cognos, Power BI, Tableau, and ML Studio. We have even created API access to the different subject areas in the data hub for custom applications. In other words, this analytics platform enables your internal developers to build new business applications, reports, and visualizations with your company’s data without having to know RPG, the JDE backend, or even SQL!

[Image: High-level JDE data flow]

Looking for more data and analytics insights? Download our eBook, “Advanced Data Insights: An End-to-End Guide for Digital Analytics Transformation.”

How to Federate Amazon Redshift Access with Azure Active Directory

Single sign-on (SSO) is a tool that solves fundamental problems, especially in mid-size and large organizations with lots of users.

End users do not want to have to remember too many username and password combinations. IT administrators do not want to have to create and manage too many different login credentials across enterprise systems. It is a far more manageable and secure approach to federate access and authentication through a single identity provider (IdP).

As today’s enterprises rely on a wide range of cloud services and legacy systems, they have increasingly adopted SSO via an IdP as a best practice for IT management. All access and authentication essentially flow through the IdP wherever it is supported. Employees do not have to remember multiple usernames and passwords to access the tools they need to do their jobs. Just as important, IT teams prevent an administrative headache. They manage a single identity per user, which makes tasks like removing access when a person leaves the organization much simpler and less prone to error.

The same practice extends to AWS. As we see more customers migrate to the cloud platform, we hear a growing need for the ability to federate access to Amazon Redshift when they use it for their data warehouse needs.

Database administration used to be a more complex effort. Administrators had to figure out which groups a user belonged to, which objects a user or group were authorized to use, and other needs—in manual fashion. These user and group lists—and their permissions—were traditionally managed within the database itself, and there was often a lot of drift between the database and the company directory.

Amazon Redshift administrators face similar challenges if they opt to manage everything within Redshift itself. There is a better way, though. They can use an enterprise IdP to federate Redshift access, managing users and groups within the IdP and passing the credentials to Amazon Redshift at login.

We increasingly hear from our clients, “We use Azure Active Directory (AAD) for identity management—can we essentially bring it with us as our IdP to Amazon Redshift?”

They want to use AAD with Redshift the way they use it elsewhere: to manage their users and groups in a single place and reduce administrative complexity. With Redshift, specifically, they also want to be able to continue managing permissions for those groups in the data warehouse itself. The good news is that you can do this, and it can be very beneficial.
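
For example, permissions can stay group-based inside Redshift while the group memberships themselves live in AAD. A minimal sketch, assuming a BI group whose name mirrors its AAD counterpart (the schema and group names are illustrative):

-- Group-level permissions managed inside Redshift.
CREATE GROUP bi_users;
GRANT USAGE ON SCHEMA analytics TO GROUP bi_users;
GRANT SELECT ON ALL TABLES IN SCHEMA analytics TO GROUP bi_users;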

Without a solution like this, you would approach database administration in one of two alternative ways:

  1. You would provision and manage users using AWS Identity and Access Management (IAM). This means, however, that you would have another identity provider to maintain—credentials, tokens, and the like—separate from an existing IdP like AAD.
  2. You would do all of this within Redshift itself, creating users (and their credentials) and groups and doing database-level management. But this creates challenges similar to legacy database management, and when you have thousands of users, it simply does not scale.
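
With federation, by contrast, the client simply presents AAD credentials at login. As a rough sketch, a JDBC connection using the Redshift driver’s Azure plugin looks something like the following (property names per the driver’s documentation at the time of writing; all values are placeholders, so verify against the current driver docs):

jdbc:redshift:iam://example-cluster.abc123.us-east-1.redshift.amazonaws.com:5439/dev
plugin_name=com.amazon.redshift.plugin.AzureCredentialsProvider
idp_tenant=<azure-ad-tenant-id>
client_id=<app-registration-client-id>
client_secret=<app-registration-client-secret>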

Learn more about our AWS expertise here.

Completing Your Company’s Cloud Transformation with Azure Windows Virtual Desktop Foundations

The completion of your IT transformation from data center to full cloud adoption can often be hindered by desktop administration. While in the past virtual desktops largely delivered on their promise to bring standardization, reduce the proliferation of applications, and simplify desktop management, they have also had their share of challenges. As an administrator, your options were limited:

  • Create multiple VM pools with customized images for different user roles
  • Overload virtual machine images with more apps than needed and hide or block them from the user, making the image bigger
  • Utilize dynamic app streaming, which required additional infrastructure to be managed

With Windows Virtual Desktop, Microsoft Azure has transformed virtual desktop delivery by completely separating the user profile data and application delivery from the operating system. The result is a user experience that parallels that of a physical device and further simplifies desktop administration, while Microsoft maintains the management of the underlying physical infrastructure.

Benefits of adopting Azure Windows Virtual Desktops (WVD):

  • Support for Windows 10 and Windows 7 virtual desktops – the only way to safely run Windows 7 after its End of Life (Jan. 14, 2020)
  • No need to overprovision hardware by aligning costs to business needs – transition from costly CAPEX hardware purchases to OPEX cloud consumption-based model
  • Simplify user administration by using Azure Active Directory (AAD) – leverage additional security controls like multifactor authentication (MFA) or conditional access
  • Highly secure with reverse connect technology – eliminates the need to open inbound ports to the VMs, and all user sessions are isolated in both single and multi-session environments
  • Utilize Microsoft Azure native services – Azure Files for file share and Azure NetApp Files for volume level backups

To help you with the transition from standard desktops or an existing on-premises RDS deployment to Microsoft Azure, 2nd Watch has developed Windows Virtual Desktop Foundations.  Windows Virtual Desktop Foundations provides you the blueprints necessary to set up the WVD environment, integrate with Azure native services, create a base Windows image, and train your team on how to create custom images.

With 2nd Watch Windows Virtual Desktop Foundations, you get:

  • Windows Virtual Desktop environment setup
  • Integration with Azure native services (AAD and AZ Storage for profiles)
  • Image build process set-up
  • A baseline custom Windows image
  • Team training on creating custom images
  • AZ Monitor setup for alerting

To learn more about our WVD Foundations, download our datasheet.

-Dusty Simoni, Sr Product Manager

Azure Cloud Shell is a Hidden Gem

The simple way to describe Azure Cloud Shell is as an on-demand Linux VM with a managed toolset that is accessible from virtually anywhere. You can access it via the Azure Portal, shell.azure.com, the Azure Mobile App, and Visual Studio Code. Pricing is simple: you only pay for the storage used to persist your files between Cloud Shell sessions. Finally, Cloud Shell offers two shell experiences, Bash and PowerShell; however, you can access PowerShell from Bash and Bash from PowerShell, so just choose whichever you are most comfortable with.

Cloud Shell contains the following tools: 

  • Linux Tools – bash, zsh, sh, tmux, dig
  • Azure Tools – Azure CLI, AzCopy, Service Fabric CLI
  • Programming Languages – .NET Core, Go, Java, Node.js, PowerShell, Python
  • Editors – vim, nano, emacs, code
  • Source Control – git
  • Build Tools – make, maven, npm, pip
  • Containers – Docker CLI / Docker Machine, Kubectl, Helm, DC/OS CLI
  • Databases – MySQL client, PostgreSQL client, sqlcmd utility, mssql-scripter
  • Other – iPython Client, Cloud Foundry CLI, Terraform, Ansible, Chef InSpec

You are probably thinking to yourself, that’s great, but what can I use it for? Good question… 

Got a bunch of Azure management scripts that you have developed and need to be able to run? Cloud Shell is a great way to run and manage those scripts. You can leverage git for version control and run PowerShell, Bash, or Python scripts whenever and wherever you are. For example, say you are grabbing some lunch and the boss sends you an email asking how many VMs are currently running in your environment, and wants the answer right now. Since this isn’t the first time the boss has asked, you have already created a script that reports how many VMs are currently running. So, you load the Azure Mobile App on your phone, connect to Cloud Shell to run the script, and get back to your lunch without having to run back to the office.
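
A minimal sketch of what that script could look like with the Az PowerShell module (Cloud Shell is already authenticated, so no Connect-AzAccount is needed there; the emailing step is left out):

# report-running-vms.ps1 - count running VMs in the current subscription.
$vms = Get-AzVM -Status                                   # include power state
$running = $vms | Where-Object { $_.PowerState -eq 'VM running' }
Write-Output "$($running.Count) of $($vms.Count) VMs are currently running."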

Are you an Azure CLI master? Cloud Shell has you covered! Cloud Shell always has the latest version of the Azure CLI without you ever having to maintain a VM or update your local installation. 

Need to deploy an agent to a bunch of VMs but don’t want to manage a Configuration Management tool? Once again, Cloud Shell has you covered. Use the built-in Ansible to run a playbook that deploys the agent you need installed. 

Do you run a multi-cloud shop? Need to deploy things to both Azure and AWS? Then you are in luck! With Cloud Shell you can use Terraform to deploy both Azure and AWS resources. Another multi-cloud idea would be to install the AWSPowerShell.NetCore module to perform day-to-day AWS tasks and automation.

There are some limitations of Cloud Shell, such as your Cloud Shell session being temporary: it is recycled after 20 minutes of inactivity.

The pricing for Azure Cloud Shell is great. As mentioned before, you only pay for storage, which is used to persist data between instances of Cloud Shell. If you install a PowerShell module or use git to clone a repo, the next time you fire up Cloud Shell, those files are still there.

Azure Cloud Shell can help with a lot of different use cases and requires very little management. For more information on Azure Cloud Shell visit https://docs.microsoft.com/en-us/azure/cloud-shell/overview or for help getting started with Azure, contact us. 

-Russell Slater, Senior Cloud Consultant

Managing Azure Cloud Governance with Resource Policies

I love an all-you-can-eat buffet. You get a ton of value and plenty to choose from, and you can eat as much, or as little, as you want, all for a fixed price.

In the same regard, I love the freedom and the vast array of technologies that the cloud allows you. A technological all-you-can-eat buffet, if you will. However, there is no fixed price when it comes to the cloud. You pay for every resource! And as you can imagine, it can become quite costly if you are not mindful.

So, how do organizations govern and ensure that their cloud spend is managed efficiently? Well, in Microsoft’s Azure cloud you can mitigate this issue using Azure resource policies.

Azure resource policies let you define what, where, and how resources are provisioned, so an organization can set restrictions and exercise granular control over its cloud spend.

Azure resource policies allow an organization to control things like:

  • Where resources are deployed – Azure has more than 20 regions all over the world. Resource policies can dictate which regions deployments must remain within.
  • Virtual Machine SKUs – Resource policies can restrict deployments to only the VM sizes that the organization allows.
  • Azure resources – Resource policies can define the specific resources that fall within an organization’s supportable technologies and restrict those outside the standards. For instance, if your organization supports SQL and Oracle databases but not Cosmos or MySQL, resource policies can enforce these standards.
  • OS types – Resource policies can define which OS flavors and versions are deployable in an organization’s environment. No longer support Windows Server 2008, or want to limit the Linux distros to a small handful? Resource policies can assist.

Azure resource policies are applied at the resource group or the subscription level. This allows granular control of the policy assignments. For instance, in a non-prod subscription you may want to allow non-standard and non-supported resources to allow the development teams the ability to test and vet new technologies, without hampering innovation. But in a production environment standards and supportability are of the utmost importance, and deployments should be highly controlled. Policies can also be excluded from a scope. For instance, an application that requires a non-standard resource can be excluded at the resource level from the subscription policy to allow the exception.

A number of pre-defined Azure resource policies are available for your use, including:

  • Allowed locations – Used to enforce geo-location requirements by restricting which regions resources can be deployed in.
  • Allowed virtual machine SKUs – Restricts the virtual machine sizes/SKUs that can be deployed to a predefined set. Useful for controlling the cost of virtual machine resources.
  • Enforce tag and its value – Requires resources to be tagged with a specified tag and value. This is useful for tracking resource costs for department chargebacks.
  • Not allowed resource types – Identifies resource types that cannot be deployed. For example, you may want to prevent a costly HDInsight cluster deployment if you know your group would never need it.

Azure also allows custom resource policies when you need a restriction not covered by the built-in policies. A policy definition is described using JSON and includes a policy rule.

This JSON example denies creation of a storage account that does not have blob encryption enabled:

{
  "if": {
    "allOf": [
      {
        "field": "type",
        "equals": "Microsoft.Storage/storageAccounts"
      },
      {
        "field": "Microsoft.Storage/storageAccounts/enableBlobEncryption",
        "equals": "false"
      }
    ]
  },
  "then": {
    "effect": "deny"
  }
}
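
To put a definition like this to work, you might save the JSON rule to a file and then define and assign it with the Az PowerShell module, roughly as follows (the names, file path, and scope are illustrative):

# Define the policy from the JSON rule above, then assign it to a scope.
$definition = New-AzPolicyDefinition `
    -Name 'deny-unencrypted-storage' `
    -DisplayName 'Deny storage accounts without blob encryption' `
    -Policy '.\storage-encryption-policy.json'

New-AzPolicyAssignment `
    -Name 'deny-unencrypted-storage' `
    -PolicyDefinition $definition `
    -Scope '/subscriptions/<subscription-id>'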

The use of Azure Resource Policies can go a long way in assisting you to ensure that your organization’s Azure deployments meet your governance and compliance goals. For more information on Azure Resource Policies visit https://docs.microsoft.com/en-us/azure/azure-policy/azure-policy-introduction.

For help in getting started with Azure resource policies, contact us.

-David Muxo, Sr Cloud Consultant
