Shannon's JD Edwards CNC Blog

To keep a modification or not–that be the question


The cost of a modification grows and grows.  If you look at your modifications, especially where you are modifying core objects, retrofit is going to keep costing you money going forward.

How can you work out how often your modified code (or custom code, for that matter) is actually being used?

One method is to use object identification, but this is only part of the story.

You'll see below that ERP Analytics is able to provide things like number of sessions, number of unique users, average time on page and total time on page for each of your JD Edwards applications.  This can be based on application, form or version – which can help you find out more.

With this information, you can see how often your modifications are used, and for how long, and make a call on whether they are worth their mettle.


[image]

Our reporting suite allows you to choose date ranges and also system codes to further refine the analysis.

[image]


You are then able to slice and dice your mods (note that we can determine modified objects too, but this uses data blending with Data Studio) to give you a complete picture:

[image]


Of course, we can augment this list with batch usage, and then calculate secondary objects from cross reference to begin to build the complete picture.  You want to narrow down both retrofit and testing if you can.


[image]


See below for how we look at queue concurrency and wait times to work out job scheduling opportunities and efficiencies.

[image]


Technical Debt - again

In my last post I showed you some advanced reporting over ERP analytics data to understand what applications are being used, how often they are being used and who is using them.  This is the start of understanding your JD Edwards modifications and therefore technical debt.

At Fusion5 we are doing lots of upgrades all of the time, so we need to understand our clients' technical debt.  We strive to make every upgrade more cost efficient and easier.  This is easier said than done, but let me mention a couple of the ways we do this:

Intelligent and consistent use of category codes for objects.  One of the codes is specifically about retrofit and needs to be completed when the object is created.  This is "retrofit needed" – sounds simple, I know.  But if you create something bespoke that never needs to be retrofitted, the best thing you can do is mark it as such.  Lots of time will be saved looking at this object in the future (again and again).

Replace modifications with configuration.  UDOs have made this better and easier and continue to do so.  If you are retrofitting and you think – hey – I could do this with a UDO – please do yourself a favour, configure a UDO and don't touch the code!  Security is also an important concept for developers to understand completely.  Because – guess what?  You can use security to force people to enter something into the QBE line – you don't need code (Application Query Security).



  1. Everyone needs to understand UDOs well.  We all have a role in simplification.
If you don't know what EVERY one of these is – you need to find out!

OCMs can be used to force keyed queries.  Wow!!!  Did you know that you can create a specific OCM that forces people to only use keyed fields in the QBE – awesome.  So simple.  I know that there is code out there that enforces this.  This is like the security tip above.



System enhancement knowledge.  This is harder (it takes time), but knowledge of how modules are enhanced over time is hopefully going to retire some custom code.  Oracle do a great job of giving us the power to find this, you just need to know where to look:



Compare releases


Calculate the financial impact.  Once you know all of this, you can start to use a calculator like the one Fusion5 has developed, which is going to help you understand your technical debt and do research around it.  We have developed a comprehensive suite of reports that allow you to slice and dice your modification data and understand which modifications are going to cost you money and which ones will not.  Here are a couple of screen grabs.  All we need to create your personalised and interactive dashboard is the results of a couple of SQL statements that we provide (or you run our agent – though people don't like running agents).


You can see that I have selected 5 system codes, and I can see the worst case and best case estimates for the retrofit of those 5 system codes.  I can see how often the apps are used and can therefore make an appropriate, finance-based decision on whether each one should be kept or not.  You are able to see the cost estimates by object type, system code and more.  Everything can also be downloaded for Excel analysis.
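To give a flavour of the input, here is a minimal sketch of the kind of SQL involved – assuming only the standard Object Librarian master table (F9860) with object name (SIOBNM), system code (SISY) and object type (SIFUNO) columns; the statements we actually provide are more involved, and the connection details here are placeholders:

#!/bin/bash
# Sketch only: count custom objects by system code and object type.
# Custom objects conventionally live in system codes 55-59.
sqlplus -s jde/yourpassword@jdeprod <<'EOF'
SELECT sisy   AS system_code,
       sifuno AS object_type,
       COUNT(*) AS objects
FROM   ol920.f9860
WHERE  sisy BETWEEN '55' AND '59'
GROUP  BY sisy, sifuno
ORDER  BY sisy, sifuno;
EOF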







Using AI and image recognition with JD Edwards


Introduction

This blog post is hopefully going to demonstrate how Fusion5 (quite specifically William in my team) has been able to exploit some really cool AI cloud constructs and link them in with JD Edwards.  We've been looking for a while for use cases for proper AI and JD Edwards, and we think that we have something pretty cool.

I want to also point out that a lot of people are claiming AI when what they are doing is not AI.  I think that true AI is able to do evaluations (calculations) based upon a set of parameters that it has not necessarily seen before.  It's not comparing the current image to millions of other images; it has been trained on MANY other images and in its internals it has the ability to apply that reference logic.  The model that has been built from all of its training can be run offline – it's essentially autonomous, and this is a critical element in the understanding of AI.


We are using JD Edwards orchestration to call out to an external web service that we've written (a web hook).  This web hook has been programmed to call a number of different AI models to interpret images that have been attached to JD Edwards data.  So, if you use generic media object attachments – this mechanism can be used to interpret what is actually in those images.  This can greatly improve a JD Edwards customer's ability to react to situations that need it.


For example, if you used JD Edwards for health and safety incidents and you wanted some additional help in making sure that the images being attached did not contain certain critical objects – and perhaps, if they do, you'd raise the severity or send a message based upon the results… You could also analyse frames of video with the same logic and detect certain objects.


We've decided to test the cloud and are using different models for our object detection.  We are using Google, Microsoft and AWS to see which is better or worse at object detection.

Object detection vs. Object Recognition

Note that there is a large difference between object detection and object recognition – stolen from https://dsp.stackexchange.com/questions/12940/object-detection-versus-object-recognition
Object Recognition: which object is depicted in the image?

  • input: an image containing unknown object(s)
    Possibly, the position of the object can be marked in the input, or the input might be only a clear image of a (non-occluded) object.
  • output: position(s) and label(s) (names) of the objects in the image
    The positions of objects are either acquired from the input, or determined based on the input image.
    When labelling objects, there is usually a set of categories/labels which the system "knows" and between which the system can differentiate (e.g. the object is either dog, car, horse, cow or bird).
Object detection: where is this object in the image?
  • input: a clear image of an object, or some kind of model of an object (e.g. duck), and an image (possibly) containing the object of interest
  • output: position, or a bounding box of the input object if it exists in the image (e.g. the duck is in the upper left corner of the image)
We've spent time training our algorithms to look for certain objects in images – so we are using object recognition.  We've trained 3 separate algorithms with the same 200 training images (we know that this is tiny, but the results are surprising!)


My PowerPoint skills are really being shown off in the above diagram of what has been done.

At this stage we’ve used a couple of the major public cloud providers and have created our own models that are specifically designed to detect objects that we are interested in, namely trains, graffiti and syringes.  This is quite topical in a public safety environment.
We've created an orchestration and a connector that are able to interrogate JDE, send the various attachments to the AI models and get some verification of what is actually in the images.  Note that this could easily be put into a schedule or a notification to ensure that this is run for any new images that are uploaded to our system.

Testing

Let's scroll deep into Google Images for train graffiti.  The reason I scroll deep is that these algorithms were trained on 70 pics of trains, 70 pics of graffiti and 40 pics of syringes.  I want to ensure that I'm showing the algorithm something that it has never seen before.


And attach this to an address book entry in JD Edwards as a URL type attachment.



In this instance we are using the above parameters: 300 as the AN8 for ABGT, and we only want type 5s (URL attachments).


William has written an orchestration which can run through the media objects (F00165) for ANY attachments.  We're currently processing image attachments, but really - this could be anything.

To call our custom orchestration, our input JSON looks like this:

{
   "inputs" : [ {
     "name" : "Object Name 1",
     "value" : "ABGT"
   }, {
     "name" : "Generic Text Key 1",
     "value" : "300"
   }, {
     "name" : "MO Type 1",
     "value" : "5"
   }, {
     "name" : "Provider",
     "value" : "customVision"
   }, {
     "name" : "ModelName",
     "value" : ""
   } ]
}
The provider in this instance is a model that we have trained in customVision in Azure.  We've trained this model with the same images as Google, and also our completely custom model.
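For the curious, the same payload can be posted straight at the AIS orchestrator endpoint with plain cURL – a sketch only, with placeholder hostname, credentials and orchestration name:

curl --request POST \
  --url https://ais.example.com/jderest/orchestrator/orch_analyseAttachment \
  --header 'Authorization: Basic <base64 user:password>' \
  --header 'Content-Type: application/json' \
  --data '{
    "inputs" : [
      { "name" : "Object Name 1",      "value" : "ABGT" },
      { "name" : "Generic Text Key 1", "value" : "300" },
      { "name" : "MO Type 1",          "value" : "5" },
      { "name" : "Provider",           "value" : "customVision" },
      { "name" : "ModelName",          "value" : "" }
    ]
  }'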

Let's run it and see what it thinks.  Remember that the orchestration is calling a Service Request which is essentially running a Lambda function through a connection.  There is some basic authentication in the header to ensure that the right people are calling it.  The Lambda function can be extended in HOW it interprets the photo, and in how many different AI engines and models it consults and returns.
{
   "Data Requests" : [ {
     "Data Browser - F00165 [Media Objects storage]" : [ {
       "Media Object Sequence Number" : "1",
       "GT File Name" : "https://c1.staticflickr.com/6/5206/5350489683_e7cdca43ba_b.jpg"
     } ]
   } ],
   "Connectors" : [ {
     "graffiti" : 0.9162581,
     "train" : 0.599198341,
     "syringe" : 0.00253078272
   } ]
}
Not too bad: it's 91% sure that there is graffiti, 60% sure that there is a train, and pretty sure that there are no syringes.  Let's try Google now.
A simple change to the provider parameter allows us to use Google next.  Note I did have some issues with my screenshots, so they might reference some different pictures, but the run revealed these results.

{
   "inputs" : [ {
     "name" : "Object Name 1",
     "value" : "ABGT"
   }, {
     "name" : "Generic Text Key 1",
     "value" : "300"
   }, {
     "name" : "MO Type 1",
     "value" : "5"
   }, {
     "name" : "Provider",
     "value" : "autoML"
   }, {
     "name" : "ModelName",
     "value" : ""
   } ]
}
Results
{
   "Data Requests" : [ {
     "Data Browser - F00165 [Media Objects storage]" : [ {
       "Media Object Sequence Number" : "1",
       "GT File Name" : "https://c1.staticflickr.com/6/5206/5350489683_e7cdca43ba_b.jpg"
     } ]
   } ],
   "Connectors" : [ {
     "graffiti" : 0.9999526739120483,
     "train" : 0.8213397860527039
   } ]
}
Similar: it thinks there is a 99.99% chance of graffiti and an 82% chance of a train – more certain than Microsoft.


Finally, let’s try a hosted model that we are running on google cloud:
We drive that with the following parameters

{
   "inputs" : [ {
     "name" : "Object Name 1",
     "value" : "ABGT"
   }, {
     "name" : "Generic Text Key 1",
     "value" : "300"
   }, {
     "name" : "MO Type 1",
     "value" : "5"
   }, {
     "name" : "Provider",
     "value" : "customModel"
   }, {
     "name" : "ModelName",
     "value" : "multi_label_train_syringe_graffiti"
   } ]
}
And the output is:
{
   "Data Requests" : [ {
     "Data Browser - F00165 [Media Objects storage]" : [ {
       "Media Object Sequence Number" : "1",
       "GT File Name" : "https://c1.staticflickr.com/6/5206/5350489683_e7cdca43ba_b.jpg"
     } ]
   } ],
   "Connectors" : [ {
     "graffiti" : 0.9984090924263,
     "syringe" : 7.345536141656339E-4,
     "train" : 0.9948076605796814
   } ]
}

So it’s very certain there is a train and graffiti, but very certain there is no syringe.

What does this mean?

We are able to do some advanced image recognition over native JD Edwards attachments using some pretty cool cloud constructs.  We've trained these models with limited data and have some great results – although we should really try some images without trains or graffiti (trust me, this does also work).  We are paying a fraction of a cent for some massive compute to load our models and process our specific AI needs.

You could be on premise or in the cloud and STILL use all of these techniques to understand your unstructured data better.  This is all done with a single orchestration.


Fusion5 have the ability to create custom models for you, find the actionable insights that you are interested in, and ensure that this information is ALERTING users to ACT.

What does our AI algorithm think of me?

As you know, it's been trained (excuse the pun) to look for trains, graffiti and syringes.  What if I pass in a picture of me?

[image]


Let’s process this for a joke: "https://fusion5.com.au/media/1303/shannonmoir-300.jpg"
{
   "Data Requests" : [ {
     "Data Browser - F00165 [Media Objects storage]" : [ {
       "Media Object Sequence Number" : "1",
       "GT File Name" : "https://c1.staticflickr.com/6/5206/5350489683_e7cdca43ba_b.jpg"
     }, {
       "Media Object Sequence Number" : "2",
       "GT File Name" : "https://fusion5.com.au/media/1303/shannonmoir-300.jpg"
     } ]
   } ],
   "Connectors" : [ {
     "graffiti" : 0.9162581,
     "train" : 0.599198341,
     "syringe" : 0.00253078272
   }, {
     "graffiti" : 0.100777447,
     "syringe" : 0.006329027,
     "train" : 0.00221525226
   } ]
}
Above is an example of processing a picture of “yours truly”, to see what Azure thinks…
10% chance graffiti and no syringe or train…  not bad…

Great.  So if this was attached to an incident in JDE, you might still want to raise the priority of the case – but not because there is graffiti or a train!

What’s next

We (Innovation team at Fusion5) are going to integrate this into JD Edwards and increase the priority of incidents based upon what is seen in the images.  

These algorithms can be trained to look for anything we want in images and we can automatically react to situations without human intervention.  

Another very simple extension of what you see here is using AI to rip all of the text out of an image, OCR if you like.  It’d be super simple to look through ALL images and convert them to text attachments or searchable text.


Imagine that you wanted to verify a label or an ID stamp on an object, this could all be done through AI very simply!


Orchestration enhancements - SFTP

We hear about enhancements in orchestration studio, and sometimes just …yawn… because we think there is not much in it.  I’ve been looking into 9.2.2.4 (I know, behind the times) and the functionality is great.
Below is a sample connector defined to a free FTP server (that did not work – keep reading)…  this is available from the tools menu in Orchestration Studio.
That's pretty simple.  Shame I cannot use keys for authentication; that might be coming in subsequent releases?
But when you go to use this connection in a service request, there is some magic.  You have a bunch of options (not just put and get) for your file IO needs.
That is really neat.  You can get files easily enough and use variables for filenames; as you can see below, just use the ${variable} syntax and they can be passed into the service request.
This is really functional (and unexpected): you can get the native output from a UBE (or the CSV or OSA output) – this is a native get – and you can also get the BIP output.

Note that this will grab the output of the UBE and FTP the file in one fell swoop.  Then schedule it using the AIS scheduler.  What might have been lots of modifications and scripts is now a single orchestration that can be put together by a simpleton like me.  Once again, an amazing use case of UDOs and orchestration working hard to reduce clients' technical debt.  We all just need to start the process of converting the old ways we do things.

I tried a couple of free FTP servers on the internet to get this working, with varying success.  The rebex.net one above had a problem negotiating security algorithms.  You cannot pass options into the FTP command with the interface that has been provided, so you might need to ensure that the two ends play nice.  Of course there are workarounds to add items to the java.security file for the JRE that is running AIS, but it might be easier to change the server (or not).

I ended up being able to test everything with the following connection details:
Go to this link to find the password – they deserve some credit for providing this! https://www.wftpserver.com/onlinedemo.htm

You can upload and download, which is cool.  I got both of them working immediately (thanks JDE).
I then had a crack at GetUBEOutput, which I thought was a bit of a joke initially, but it's really good.
As I stated before: 1 SR, 1 schedule and 1 orchestration could launch a job, then send the output from the job to a remote SFTP server.  It's that easy.  How many mods are we going to save with this alone?
The return JSON is nice and informative too:
{
  "Connectors" : [ {
    "reportName" : "R0004C",
    "reportVersion" : "XJDE0001",
    "jobNumber" : 24,
    "executionServer" : "F5ENT",
    "jobStatus" : "D",
    "objectType" : "UBE",
    "user" : "SM00001",
    "environment" : "JPLAY920",
    "submitDate" : "20181030",
    "lastDate" : "20181030",
    "submitTime" : "174358",
    "lastTime" : "174400",
    "oid" : "R0004C_XJDE0001",
    "queueName" : "QBATCH",
    "fileName" : "/upload/R0004C_XJDE0001_24_PDF.pdf"
  } ]
}
In summary, the SFTP worked out of the box first time.  I created a connection, created an SR, wrapped the SR in an orchestration and boom!  I was able to SFTP files from anywhere to anywhere.  Making this parameter driven is easy, and then submitting the same via cURL is also easy – so you could EASILY call this orchestration from a BSFN (see B98ORCH; you probably need 9.2.3 for this).
jdeOrchestrationManager *orchMgr = (jdeOrchestrationManager *)jdeOrchestrationManagerInit(lpBhvrCom);
The magic exists in the function pointer above
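And if you want to double-check from a shell that the transfer actually landed, a quick sftp session against the same demo server does the trick – the user and host placeholders come from the wftpserver demo page linked above, and the path is the one from the return JSON:

# user/host (and port) per the wftpserver online demo page
sftp DEMO_USER@DEMO_HOST <<'EOF'
ls -l /upload
get /upload/R0004C_XJDE0001_24_PDF.pdf /tmp/
EOF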





Fraud detection in JDE using AI and orchestration

Okay, I've posted about it a number of times, but we all just need to admit it – orchestration is cool for JD Edwards.  Yeah, AIS was good, but it's just got better and better.  This really is a challenge for us (the community that thrives on JD Edwards) to extend the use of JD Edwards beyond the traditional boundaries.  I'd like to challenge people to look outside of the square when they are solving business problems.  I'm going to step you through a really simple example of big gains from small investments using AI, cloud and orchestration.

I do tend to talk a little more technical than most, unashamedly to be honest.  What I'm promoting in this post is getting into the nitty gritty of web-hooks and APIs.  Understand that this is all really easy technology, easy to find and easy to implement.  Let's say that you wanted a super simple solution that ripped the text out of ANY attachment in JDE – no matter if it was a picture, PDF or anything.  You could then create a simple table in JDE and store the text in some sort of text search field (this can be implemented in so many ways, generally triggers over tables etc.).  This table could index the text in EVERY attachment.  Therefore every scanned PO or WO, every PDF attachment, every special instruction, every handwritten note that was a photo could be converted to its TEXT value and made searchable.  WOW, that would be great.

I can only imagine that finding a vendor's part number, or the special notes on a PO from a couple of years ago, could really save your bacon at some stage.  Being able to search attachments to work orders for serial numbers would be amazing for so many clients.  What if I said that something like this could be cobbled together in under a week?  Treating your ERP like a searchable DMS – wow!

What if you were to look at an amazing extension of this idea to prevent fraud (I have to thank a good contributor, Matthew S, for this extension)?  What if I took a hash of the resulting text from the scan of every document and looked for duplicate invoices (or fraud)?  What if I could search for duplicate anything (let's not get right into locality-sensitive hashing https://en.wikipedia.org/wiki/Locality-sensitive_hashing)?  There are some really amazing things that you could do.  This would be a complete bolt-on solution that uses orchestration and some "APIs".  We will talk about said APIs later.

Fusion5 have gone a long way to make this a reality.  We've created public facing APIs that are capable of many different interactions with a media object, for example:

  • A custom AI algorithm to recognise particular objects that you have trained the algorithm with
  • A character recognition algorithm that can turn something like this:
Into this (my mind boggles that I can do such deep analysis of a photo like this):

700N $99 TCOME E $45 $66 EACH 57 20% 20 PARTICIPATING BEER, PRE-MIX AND CIDER OFF 1 LITRE Vad Cruiser Captain Morgan Spiced Gold Rum, Smirnoff Red Vodka OR Canadian Club 700mi Whole team over? Covered. Baron Samedi Spiced Rum OR Jack Daniel's Old No 7 700ml HAHN Bombay Sapphire Gin 1 Litre 2 FOR BUY 2 OR MORE CASES OR 10 $28 CASE CASES OR 10 PACKS $41 CASE SA $10 EACH Coron "Extra SEST SERVED OVER VI Dew MAVE UP TO CASE DE PORTO ΚXXX XX GOLD GOLD CERVER MA BOURBON HAHN Heineken WHISKEY MIXED WITH PREMIUM CIDE MERCURY 4 LITRES PERON STRO AZZU Aceand hotely CERVECERIA MODELO, S.A MEXICO, D.F. PREMIUM Cola KOPPARBERI 'BUILT TO LAST 22. SIZ ITU TELS ASTUNDA perDry EST RUM 18 PREMIUM Scotch 700m De Bortoll Premium 4 Litre Casks Excludes fortified Apple IRON JACK 6.3% LARD CIDEI ORIGINAL XXXX Gold Bottles or Cans 24x375ml 24 PACK 6.9% ALC/VOL & COLA 190 mL 45% ALCNVOL HISP AUSTRALIAN LAGER EST 1911 HODART, TAS Heineken Premium Lager Bottles 24x330ml OR Coopers Premium Lager Bottles 24x355ml SUPER Peroni Nastro Azzurro Bottles 24x330ml 10 CANS 24 PACK SUPER SAVER S SUPER SAVER 10 CANS CANS SUPER SAVER SUPER SAVER EACH EACH SA EACH EACH EACH S BREW OOPERS HRB WERY TOOHEYS VODKA COOPERS PURE CI SAD BLONDE Um Low Carb Logo LOW CARU DRY VELVE ARK AL WOODSTOCK CRUISER BOURBON ORCHARD AND COLA THIEVES RASPBERRY Special - FACE GAMUN BOKLESS CARBOHYDRATES Houghton Classic Shingleback Red Knot AUS Twelve New Zealand Pinot Noir CLEAN CRISP TASTE APPLE CIDER Hardys HRB 2750 46% ALCANO Mumm Cordon Rouge Champagne NV 24 6.0 PACK 30 CANS IN-STORE @ your local BWS PICK UP Shop online. Collect instore. WITH YOUR GROCERIES @ woolworths.com.au/bws SIMPLY SHOP, SCAN AND SAVE with Woolworths Rewards at BWS rewards 17" We support the responsible service of alcohol. Available in SA from Wednesday 11 April until Tuesday 17 April 2018 unless sold out prior Savings are based on offers apply to the quantity advertised only. Limit rights reserved. Specials may not be available in all stores including Alice Springs. "Standard local call charges y by store. Wine is 750ml unless otherwise stated. At this great price no further discounts apply Casks not available in Adelaide City or Rundle Mall. See www.woolworthsrewards.com.au for terms and conditions Selected cases may not be available in all stores. WC110418/SA Page 35

  • Standard object detection
  • Landmark detection

We’ve embodied this in a simple to use GUI, to allow for interactive discovery (before plugging it in).

So all of these options can be plugged into ANY analysis in JD Edwards and scheduled using orchestration – extracting more value from plain old photos.

In action:

My use case is simply to find duplicate invoices.  Okay – this is going to be easy.

  1. I create my custom table (F55DUPINV) and view combo in my "database of choice" that is going to hold the text retrieved out of any attachment.  Note that there is going to be some smarts here, because the text will be long.
  2. I create an orchestration that takes a parameter of the MO data structure name and calls my API (webhook) via a connection.  This retrieves the text and then inserts it via a JDE form (or via a database connector if too big).
  3. This orchestration will also hash the text to a unique value for quick uniqueness checks (see the sketch after this list).
  4. I can then have another scheduled orchestration that looks for new duplicates (or fuzzy near-matches) and sends an email to the fraud officer of the instance, with a link to the actual transactions.
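A minimal sketch of the hashing idea in step 3, assuming the text has already been extracted to a file – note that F55DUPINV is the custom table from step 1, and the column names (DIDOCO, DIHASH) are made up purely for illustration:

#!/bin/bash
# Sketch: hash extracted attachment text and look for an existing duplicate.
TEXT_FILE=$1      # file containing the OCR/extracted text
DOC_KEY=$2        # e.g. the invoice document number

HASH=$(sha256sum "${TEXT_FILE}" | awk '{print $1}')

sqlplus -s jde/yourpassword@jdeprod <<EOF
-- any earlier document that produced exactly the same text is a suspect
SELECT didoco FROM proddta.f55dupinv WHERE dihash = '${HASH}';
INSERT INTO proddta.f55dupinv (didoco, dihash) VALUES ('${DOC_KEY}', '${HASH}');
COMMIT;
EOF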

Four steps which can extract the text out of every attachment in JDE, compare them, and automate the delivery of exceptions to your business rules.

I have not changed a single line of standard code, and I've not contributed to my technical debt.  I've got actionable results from the implementation of a couple of nice orchestrations, some smart database features and some scheduling of orchestrations with notifications.

Although similar to a previous post, this shows real transactions and data going into JD Edwards in a different way.  I guess the perspective is more business focused.

Fusion5 are in the API & orchestration space.  If you'd like to trial some of this technology at your site – please do not hesitate to reach out.  We can get demos like this running in no time and allow you to harness the power of orchestrations at your organisation to extract real business value.  We can provide you a key to use our APIs and orchestrations that are ready to go.


IoT in action

JDE IoT is easy, let's look at a use case and see how you can implement this solution.

Firstly, if you don't know how IoT can help you – think of something that you measure manually and let's automate that.

So, if you need to go and measure the temperature in a control room, why not use a sensor to do it?  Then you can have it measured all of the time.  What about water quality?  What about lidar to measure the height of a stack of coal?  No worries!

At Fusion5, we've actually been through a process of designing this complete solution.

We have been using Particle (particle.io) boards to send sensor data to JD Edwards, but there is some cool tech in the middle.


We are using the Particle 3G boards, like the Boron below


but we are also dabbling with the latest mesh modules that they have released.  These allow us to have fewer 3G modules, but more sensors.

This is allowing us to produce any device, using a myriad of sensors, and to react to the data immediately.



The build at the moment is above, with a remote temp sensor and a waterproof, dustproof housing.

Below is a set of devices for a PoC we've completed here in Australia.  We have a couple of production PoCs working with our JD Edwards clients.



These devices wake up and send temperature and humidity data every 15 minutes to our Particle cloud.  We are using Particle as the middleman, as the complete Particle device solution (including data plan) is perfect.  Added value: if the power is disconnected, this also triggers an event that you can use to raise a work order – let's be honest, if the power goes, this is not good.  The device has a battery which allows 2 weeks of disconnected data to be captured and sent.

We are using 3G, as it does not create any holes in the client's networks.  3G also ensures that if the power goes out, we can still send the sensor data out to the cloud and react.  Note that mesh devices can be connected to the 3G board, which extends its range without additional 3G data plans.


Simply, the diagram looks like the above, but we need more layers to be future proof, and the logic layer needs some explanation.  You can see that all of the items are standard and the JDE orchestrations are configuration, not code.  The solution is implemented without increasing your technical debt.

The above shows a little more detail of where we are adding strategic value, and how this is an enterprise solution – not a tactical one.  We use JDE for what it is good for – raising work orders and storing central data – it's not good for storing all of the trend data.  We use cheap cloud storage (2.3 cents per GB per month, or a massive 28 cents per GB per year).  Even with readings every 15 seconds and allowing 256 bytes of data per read [date, time, GPS, temp, humidity], we are only going to store 24 × 60 × 4 = 5,760 readings per day, or 1.4 MB a day, or 511 MB a year per device.  This means that you can store the data from 200 devices for about 28 dollars a year.  You don't want to add 100 GB to your JDE database, do you?

You can see there are a number of items on this diagram.  We are using public cloud (in many cases AWS Lambda) to run logic over the top of the data.  We do this as an initial triage, as we don't want to call orchestrations every time we get a meter reading (read the numbers above).

The data is interpreted in this cloud logic layer, which can then determine whether a business rule has been breached and call an orchestration accordingly.  Once again, the cloud costs of Lambda are tiny and this is a cheap solution to run at scale.

Note that the threshold values (the actual breach values – high and low temperatures) ARE stored in JDE, so there is a single place to go to maintain the threshold data.  The Lambda functions query this on a scheduled basis – keeping everything fresh and efficient.
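To make the flow concrete, here is a minimal sketch of the triage idea – shown as a shell script with jq rather than the real Lambda, with made-up thresholds and a placeholder orchestration URL:

#!/bin/bash
# Triage one reading against thresholds cached from JDE; only call the
# orchestration (i.e. bother JDE) when a business rule is breached.
READING=$1        # e.g. a file containing {"device":"F5-001","temp":78.2,"humidity":41}
TEMP=$(jq -r '.temp' "${READING}")
HIGH=75           # in the real solution these come from JDE on a schedule
LOW=2

if [ "$(echo "${TEMP} > ${HIGH} || ${TEMP} < ${LOW}" | bc -l)" -eq 1 ]; then
  curl --request POST \
    --url https://ais.example.com/jderest/orchestrator/orch_raiseWorkOrder \
    --header 'Authorization: Basic <base64 user:password>' \
    --header 'Content-Type: application/json' \
    --data "$(cat "${READING}")"
fi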

You'll also see mention of Campfire in the diagram.  This is a Fusion5 integration solution for enabling on-premise software to be connected to the cloud.  It's our super light middleware – no fuss, and built on AWS highly available constructs.  A highly available, disaster-recoverable integration solution in the cloud!

All of the measurement data is being stored in a bucket (whether that is Azure, Google Cloud Storage or S3) to enable training AI down the track.  Imagine pointing QuickSight at your data and extracting insights.

This is a complete solution that can scale to any number of devices.  It delivers TRUE real-time ERP functionality – raising work orders when things get too hot – but also allows for future prediction analysis by providing the training data for AI.  This AI can deliver true actionable insights back into JDE and perhaps raise work orders before something actually gets too hot (based upon all of the training data).

This is all decoupled from the ERP.  This is all getting clients cloud ready.  BUT, JDE is doing all of the things that it is great at!

We have a self-service web portal where users can view any of the configured IoT devices:

This can easily be hosted in a cafe1 window and accessed from JDE.


You can see from the above that all of the information is at your fingertips from JDE – perfect.


We can drill down to the history too


This is all enabled with a simple orchestration and cafe1 pages. 

You also don't need JDE – we have the capability to call "webhooks" that can go anywhere, and we can also use Campfire to call bespoke, on-premise functionality.

Fusion5 and the innovation team are plugging IoT, and the ability to embrace AI, into our ERP.  This implementation means clients are not making large capital investments, and you are able to prototype quickly and easily.

Devices cost $150 each, data subscriptions are around $5 a month, and cloud costs (logic and website hosting) are another $5 a month.  Sure, the consulting might cost something, but you can see that the ongoing investment is minimal!

If you want to plug IoT into your JDE in a strategic way, Fusion5 would love to help!

Average JDE web performance - literally

Are you fast or slow?  How can you tell if there is a little bit more that you can possibly extract out of your ERP?  Performance is generally a subjective thing; you lull your users into a sense of good performance and bad performance based purely on precedent.

I'd like to see this analysis extended to an objective analysis, and this is precisely what we have done for JD Edwards using ERP analytics.

We deploy Google Analytics as part of the tools release.  This sends a tiny amount of JavaScript to the client to execute on certain HTML events in the client browser, for EVERY page and EVERY user.  This does sound like a lot, but we've done a significant amount of performance testing with this and have noticed an immaterial difference in performance and traffic.

We have this solution running for about 50 clients in Australia and around the world.  Our clients are able to benchmark performance AND productivity, then measure any deltas caused by change.

So I thought that I would reveal some pretty cool findings.  We've analysed over 32,000,000 page loads and over 1,140,000 logins (sessions) for the table below.

This is the last 100 days of data for approximately 26 active JDE clients, all anonymous of course.

I'm graphing 2 metrics here:

Avg Page Load Time: The average amount of time (in seconds) it takes that page to load, from initiation of the pageview (e.g., click on a page link) to load completion in the browser. 

Avg. Page Download Time: The time to download your page.

Personally, I find this pretty impressive.  On average, JDE users are accessing 29 pages per session.  That is pretty interesting.

On average the page download time is 0.05 seconds – tiny.

On average the time it takes to load a page is 0.86 of a second – pretty good too.  On average we have a sub-second response time for JDE as an ERP.

If your site is taking longer than that, you are not reaching your potential.  There are some pretty quick fixes for a lot of these issues, which will ensure that you are getting the most out of your hardware and users.




Orchestration client, clear cache and architectural options for high integration

This is a quick and technical post about cache and AIS servers.

One of my clients was complaining that when they used orchestration client (to call an orchestration) it always picked up the new changes, but when they used Postman (to call the same orchestration) the changes were never in the resulting payload.  I (of course) stated that this was impossible, that the calls from Postman do exactly the same thing as orchestration client, and that my customer must be wrong...

But, it seems that there might be a large amount of egg on my face, for two reasons.

  1. Architecture
  2. Cache clearing

Architecture

When we set up Orchestration Studio and AIS, Fusion5 generally create a couple of instances of each.  The reason for this is that we have a bunch of mobile applications using AIS, so we want them to come into their own AIS instance, which then points to its own dedicated HTML server.  Makes sense?  Yeah!  This is also complicated by the fact that we have a custom domain which points to an AWS ELB, which points to an nginx server, which proxies the actual traffic!  YAY.  It is actually a really nice setup.

We do the above so that we don't have funny ports in our URLs and we can do HTTPS offload.  Some simple nginx configuration can serve up all of the environments from a single server on 443 or 80 – this is really nice and neat.  If you have crazy ports in your environments you need to stop – browsers are shutting this down pretty quickly.

Oh, and the OTHER AIS instance pair is for the HTML server that all normal users log into – of course this needs an AIS server now for all of the watchlists and other items.  So we create these separately.  This IS the instance pair that we hook up to orchestration too.  See my confusing diagram below.

So orch needs an AIS server, which points to an HTML server, which points to an AIS server (I personally think that this is stoopid).  You now get double connections on all of your HTML servers because of the AIS connections, so you need to double your "maximum users" setting in the JAS.INI to cover all the extra sessions...

Anyway, I'll stop complaining and get back to the issue...  

I think you can tell from my explanation above that the external Postman link was pointing to one AIS server (9062) and Orchestration Studio was pointing to another (9052).  This was a fundamental architectural reason for the cache not being cleared on the AIS server, but it was not the only reason.

Diagram showing multiple servers and their purpose and relationship

Cache clearing

The second reason was that when you log into the orchestration client, it clears the cache for the AIS server that it is part of – magically.  So when you run any orchestration from the client, it picks up the latest version of the orchestration – nice.  But if you are not logging into the AIS server's client component, then the cache is NOT being cleared.  Therefore Postman was picking up some old crap and displaying it.

All that needs to be done is for someone to log in to the client component of the AIS server behind the proper domain name (9062), and BOOM – the cache is cleared.

We could also give our client a cURL command or something that would force the cache clear too - essentially pressing the clear cache button from the client.

These are my cache settings, cache is not enabled - but it's certainly on!

So, case closed.  We provided a link to the AIS client that is being exposed externally; the client logs into said orch client (9062), the cache is cleared, and Postman shows the correct data.




never underestimate the power of stderr

JDE debugging is lots of fun, especially on Unix-based environments.  No irony – I seriously like it.

What especially rings my bell is the fact that the OS is nice and basic and you can find things easily.  I want to talk about 2 specific items that CNC people often miss when dealing with logging on Linux / Unix.

Firstly, the root dir for anything dodgy is $SYSTEM/bin64 ($SYSTEM is an environment variable that is defined for the user that runs the JDE services).  So look there if you are seeing output files that are not in the correct place, or other strange things (see below):

-rw-rw-r--.  1 jde920 jde920   29639 Mar 18 15:25 \JDEdwards\E910\output\WorkOrders\CPG\WO_20500305_20190318_152535.csv
-rw-rw-r--.  1 jde920 jde920   29639 Mar 18 15:25 \JDEdwards\E910\output\WorkOrders\CPG\WO_20500305_20190318_152539.csv
-rw-rw-r--.  1 jde920 jde920   29639 Mar 18 15:25 \JDEdwards\E910\output\WorkOrders\CPG\WO_20500305_20190318_152543.csv
-rw-rw-r--.  1 jde920 jde920   29639 Mar 18 15:25 \JDEdwards\E910\output\WorkOrders\CPG\WO_20500305_20190318_152546.csv
-rw-rw-r--.  1 jde920 jde920   29639 Mar 18 15:26 \JDEdwards\E910\output\WorkOrders\CPG\WO_20500305_20190318_152608.csv
-rw-rw-r--.  1 jde920 jde920   29639 Mar 18 15:26 \JDEdwards\E910\output\WorkOrders\CPG\WO_20500305_20190318_152612.csv
-rw-rw-r--.  1 jde920 jde920   29639 Mar 18 15:26 \JDEdwards\E910\output\WorkOrders\CPG\WO_20500305_20190318_152616.csv
-rw-rw-r--.  1 jde920 jde920   29639 Mar 18 15:26 \JDEdwards\E910\output\WorkOrders\CPG\WO_20500305_20190318_152630.csv
-rw-rw-r--.  1 jde920 jde920   29639 Mar 18 15:26 \JDEdwards\E910\output\WorkOrders\CPG\WO_20500305_20190318_152636.csv
-rw-rw-r--.  1 jde920 jde920   29639 Mar 18 15:26 \JDEdwards\E910\output\WorkOrders\CPG\WO_20500305_20190318_152637.csv


Okay, so we have found the missing export files: they are using the Windows \ separator, not /, and have therefore been created under the bin64 directory with the whole intended path as part of the file name.

Secondly, $SYSTEM/bin64/jdenet_n.log is a valuable place for stderr.  What is stderr, you ask?  Well, I refer you to Google for a good answer, but in simple terms it's all of the OS errors that you generally do not see in stdout.

For example, I was recently chasing a little problem around an OSA – specifically, whether JDE could load a 32-bit OSA using a 64-bit version of JDE...  Any guesses?

LoadLibrary - dlopen: No such file or directory
/jdedwards/e920/system/lib/libFSosa.so: wrong ELF class: ELFCLASS32: No such file or directory
libFSosa.so: No such file or directory
LoadLibrary - dlopen: No such file or directory
/jdedwards/e920/system/lib/libFSosa.so: wrong ELF class: ELFCLASS32: No such file or directory
libFSosa.so: No such file or directory
LoadLibrary - dlopen: No such file or directory
/jdedwards/e920/system/lib/libFSosa.so: wrong ELF class: ELFCLASS32: No such file or directory
libFSosa.so: No such file or directory
LoadLibrary - dlopen: No such file or directory
/jdedwards/e920/system/lib/libFSosa.so: wrong ELF class: ELFCLASS32: No such file or directory
libFSosa.so: No such file or directory
LoadLibrary - dlopen: No such file or directory
/jdedwards/e920/system/lib/libFSosa.so: wrong ELF class: ELFCLASS32: No such file or directory
libFSosa.so: No such file or directory
LoadLibrary - dlopen: No such file or directory
/jdedwards/e920/system/lib/libFSosa.so: wrong ELF class: ELFCLASS32: No such file or directory
libFSosa.so: No such file or directory
LoadLibrary - dlopen: No such file or directory
/jdedwards/e920/system/lib/libFSosa.so: wrong ELF class: ELFCLASS32: No such file or directory
libFSosa.so: No such file or directory

Well, that would tell you the answer - NO!

The interesting thing to remember is that this is NOT in the jde.log for the UBE, nor was it in the debug logs.  All we got was:

Apr  3 12:28:27.740781  winansi.c1737    -      LoadLibrary("/jdedwards/e920/system/bin64/libFSosa.so")
Apr  3 12:28:27.740793  winansi.c1737    -      LoadLibrary("/usr/java/jdk1.8.0_192-amd64/jre/lib/amd64/server/libFSosa.so")
Apr  3 12:28:27.740799  winansi.c1737    -      LoadLibrary("/usr/java/jdk1.8.0_192-amd64/jre/lib/amd64/libFSosa.so")
Apr  3 12:28:27.740803  winansi.c1737    -      LoadLibrary("/jdedwards/e920/system/lib/libFSosa.so")
Apr  3 12:28:27.740826  winansi.c1737    -      LoadLibrary("/jdedwards/e920/system/libv64/libFSosa.so")
Apr  3 12:28:27.740830  winansi.c1737    -      LoadLibrary("/oraclient/product/12.2.0/client_64/lib/libFSosa.so")
Apr  3 12:28:27.740836  winansi.c1737    -      LoadLibrary("/oraclient/product/12.2.0/client_64/lib/libFSosa.so")
Apr  3 12:28:27.740840  winansi.c1737    -      LoadLibrary("/usr/java/jdk1.8.0_192-amd64/jre/lib/amd64/server/libFSosa.so")
Apr  3 12:28:27.740845  winansi.c1737    -      LoadLibrary("/usr/java/jdk1.8.0_192-amd64/jre/lib/amd64/libFSosa.so")
Apr  3 12:28:27.740849  winansi.c1737    -      LoadLibrary("/jdedwards/e920/system/lib/libFSosa.so")

As you can see the debug logs were not enough.

The wonderful command that spits out this source of truth is found in RunOneWorld.sh:
...
print "     Starting jdenet_n...">> $LOGFILE
cd $SYSTEM/$BIN_FOLDER
$SYSTEM/$BIN_FOLDER/jdenet_n > $SYSTEM/$BIN_FOLDER/jdenet_n.log 2>&1 &

Awesome – see the 2>&1?  That is the important part.  It says: take anything written to stderr and send it to the $SYSTEM/$BIN_FOLDER/jdenet_n.log file along with stdout.
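If you want to see the difference for yourself, here is a two-line demo that works in any shell:

# stderr still goes to the terminal even though stdout is redirected...
ls /no/such/dir > out.log
# ...until you merge stderr (file descriptor 2) into stdout (file descriptor 1)
ls /no/such/dir > out.log 2>&1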

Awesome.  Because runube is essentially started from the environment of a kernel, it inherits the same settings, despite being a separate PID.

See more here https://www.computerhope.com/jargon/s/stderr.htm

Stderr, also known as standard error, is the default file descriptor where a process can write error messages.
In Unix-like operating systems, such as Linux, macOS X, and BSD, stderr is defined by the POSIX standard. Its default file descriptor number is 2.
In the terminal, standard error defaults to the user's screen.
So, as someone funnier than me just said, we don't have a big problem - we have a bit problem!  We need a new OSA for this bad boy to work.



JDE PrintQueue using EFS? Sure, but how much space do I have?

Think I have enough room in my printqueue?

We store the printqueue in EFS for a number of reasons, but are we going to run out of space?


df -km
Filesystem                                         1M-blocks  Used     Available Use% Mounted on
/dev/nvme0n1p2                                         30708 19233         11476  63% /
devtmpfs                                                7690     0          7690   0% /dev
tmpfs                                                   7711     0          7711   0% /dev/shm
tmpfs                                                   7711    17          7695   1% /run
tmpfs                                                   7711     0          7711   0% /sys/fs/cgroup
tmpfs                                                   1543     0          1543   0% /run/user/1001
fs-0be57732.efs.ap-southeast-2.amazonaws.com:/ 8796093022207 10166 8796093012041   1% /jdedwards/e920/EFSPrintQ
tmpfs                                                   1543     0          1543   0% /run/user/1002
tmpfs                                                   1543     0          1543   0% /run/user/1000


8,796,093,012,041 MB available – roughly 8,796 petabytes (about 8 exabytes).

That is going to take a while to fill my PrintQueue!




JDE Object Analysis

Have you ever wanted to know which module in JDE is the most complex?  Which one has the most objects, the most complexity or the most code?

No…  Neither have I.  But now that I've created the report, I find it pretty interesting.

You can go here and slice and dice some default data using my interactive dashboard.
Make sure that you look at the report – there is so much more data in the interactive report, allowing you to filter and control the results.

This report does a lot more than the surface indicates, as it has actually created a unique hash of the code behind the scenes.  This allows me to compare the code between pathcodes, or back to vanilla JDE, to see what has actually changed at my site.  This is going to prepare me for continuous delivery.

But this also shows the power of Data Studio from Google, and how an interactive report is SO much more valuable than a static one.

This has 4 main pages.
The first page compares the amount of code (yes, I really want to tell you how I got this) and the number of controls – yes, I also want to explain how I did this.  Take the size of code to be relative; it's a sum of all the "code"-like objects that are stored as BLOBs – this includes source code for BSFN and BSSV.

I could have used a logarithmic scale to see the control count better; I put this on page 4.
There was actually some SQL over the top of all of the central objects files (F987*) to determine all of this – looking at the BLOBs.
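To give a flavour of the approach (a sketch, not the exact SQL – the BLOB column name is illustrative, and you need execute rights on DBMS_CRYPTO): hashing the spec BLOBs lets you compare objects between pathcodes with a simple join on the hash.

#!/bin/bash
# Sketch: hash central-objects BLOBs so two pathcodes can be diffed on the hash.
# The BLOB column name is illustrative; 2 = DBMS_CRYPTO.HASH_MD5.
sqlplus -s jde/yourpassword@jdeprod <<'EOF'
SELECT DBMS_CRYPTO.HASH(erspec, 2) AS spec_hash   -- erspec: illustrative BLOB column
FROM   pd920.f98740                               -- one of the F987* central objects tables
WHERE  ROWNUM <= 10;
EOF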

The second page is all about how many objects are in each system code.  Try the filters out – very cool.


Finally, I have a pie chart, just because I like pies.  You'll need to go and look at that one.

This is a good summary of the relative complexity of modules in JDE.  No amazing actual value on its own, but if you compare it with what you actually use, then it starts to get interesting!

If you want to look at your data, which can include custom system codes, this is a fairly trivial task to execute.



cheats guide to timeouts

On Linux, to change the web timeouts for JDE:

Find where the agent runs:

[oracle@jdepp1 SCFHA]$ ps -ef |grep java | grep -i scfagent

jde920    2554     1  0 Apr12 ?        00:08:38 /jde_home_32/SCFHA/jdk/jre/bin/java -classpath /jde_home_32/SCFHA /lib/scfagent.jar com.jdedwards.mgmt.agent.Launcher
oracle   31900     1  0 Apr26 ?        00:03:03 /jde_home/SCFHA/jdk/jre/bin/java -classpath /jde_home/SCFHA/lib/scfagent.jar com.jdedwards.mgmt.agent.Launcher

Easy.  Now find the relevant web.xml file:

find /jde_home_32/SCFHA -name web.xml -print

[oracle@jdepp1 SCFHA]$ find /jde_home/SCFHA -name web.xml -print
/jde_home/SCFHA/targets/AIS_PP_JDEPP1/owl_deployment/JDERestProxy.ear/app/JDERestProxy.war/WEB-INF/web.xml
/jde_home/SCFHA/targets/WEB_PP_JDEPP1/owl_deployment/webclient.ear/app/webclient.war/WEB-INF/web.xml
/jde_home/SCFHA/targets/BSSV_PP_JDEPP1/owl_deployment/E1Services-PP920-wls.ear/app/E1Services-PP920-web.war/WEB-INF/web.xml
/jde_home/SCFHA/targets/ORCH_PP_JDEPP1/owl_deployment/OrchestrationStudio.ear/app/OrchestratorStudio.war/WEB-INF/web.xml
/jde_home/SCFHA/targets/RTE_PP_JDEPP1/owl_deployment/EventProcessor_EAR.ear/app/EventProcessor_WAR.war/WEB-INF/web.xml
/jde_home/SCFHA/targets/RTE_PP_JDEPP1/owl_deployment/JDENETServer_EAR.ear/app/JDENETServer_WAR.war/WEB-INF/web.xml

Back up the web one, then edit it.

Search for /web-app (in vi):

        <env-entry-name>oracle/portal/provider/global/log/logLevel</env-entry-name>
        <env-entry-value>7</env-entry-value>
        <env-entry-type>java.lang.Integer</env-entry-type>
      </env-entry>

</web-app>
"targets/WEB_PP_JDEPP1/owl_deployment/webclient.ear/app/webclient.war/WEB-INF/web.xml" line 976 of 976 --100%-- col 3

Add this before </web-app>

<session-config>
<session-timeout>180</session-timeout>
</session-config>

Looks like this
      <env-entry>
        <env-entry-name>oracle/portal/provider/global/log/logLevel</env-entry-name>
        <env-entry-value>7</env-entry-value>
        <env-entry-type>java.lang.Integer</env-entry-type>
      </env-entry>
<session-config>
<session-timeout>180</session-timeout>
</session-config>
</web-app>
~
That gives 3 hours (180 minutes).

Do the same for the web.xml under user_projects.

ps -ef | grep <your JAS SERVER NAME>

[oracle@jdepp1 SCFHA]$ ps -ef |grep java |grep WEB_PP_JDEPP1 | awk -F"-DINSTANCE_HOME="'{print $2}' | awk '{print $1}'
/Oracle_Home/user_projects/domains/e1apps

Then the same find – a nerdy use of backticks:

[oracle@jdepp1 SCFHA]$ find `ps -ef |grep java |grep WEB_PP_JDEPP1 | awk -F"-DINSTANCE_HOME="'{print $2}' | awk '{print $1}'` -name web.xml |grep stage |grep WEB_PP

/Oracle_Home/user_projects/domains/e1apps/servers/WEB_PP_JDEPP1/stage/WEB_PP_JDEPP1/app/webclient.war/WEB-INF/web.xml

vi that and fix it too.

Then the JAS.INI.
The user session cache timeout is in milliseconds: 3,600,000 for 1 hour.

In my case 10,800,000 – that is 3 × 60 × 60 × 1,000 ms, or 3 hours.

Bounce and done!

JDE scheduler BULK password change with LDAP enabled

You have heaps of scheduled jobs and you need to change the password that is saved in the scheduler.  Easy, because there is a great function in the admin password change application – but you cannot use it when LDAP is enabled.

This is a strange quandary too, as you still need to save the LDAP password in the scheduler table.



What do you do when you need to change the password for 300 scheduled jobs?  No problem – I got you!

Just update a single record through the application (as below, I changed the record for report 'R55HR002', version 'RBVS0010').


Then run the following SQL:

update sy920.f91300 set sjschpwd = (
select sjschpwd from sy920.f91300 where sjschrptnm = 'R55HR002' and sjschver = 'RBVS0010')
where sjschuser like 'ZSCH%' and  sjschrptnm != 'R55HR002';
commit;

BOOM! 290 records updated. And schedules working.
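If you want to verify, a quick query over the same table and columns shows that every scheduler record now carries the same encrypted password:

sqlplus -s jde/yourpassword@jdeprod <<'EOF'
-- every ZSCH* scheduler record should now share a single encrypted password value
SELECT sjschpwd, COUNT(*) AS jobs
FROM   sy920.f91300
WHERE  sjschuser LIKE 'ZSCH%'
GROUP  BY sjschpwd;
EOF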

Thanks to JDE for not using the job name or version name in the encryption.


More JDE scheduler frustrations

I have my 300 jobs and the password is right, but now I have to "reset schedule" on all of them.  When I open any of the jobs, there is no next schedule (the 5 rows appearing in the grid).  A screen shot is below.

There is probably a processing option, or something much simpler than what I'm about to explain, but hang on!


See how, when I look at the above, there is no listing of future (or past) scheduled jobs.

So I need to open the job [as above], use the "reset schedule" form exit, then press OK and save – for every single one!!  This is going to take ages.  In IT it's simple: I don't do repetitive tasks...  I automate them.

You cannot run the schedule app (P91300) from IV or fast path, otherwise I'd be tucking into an orchestration-based solution – that'd be nice and easy.

So, I totally need to go old school on this, lucky I have some tricks for old school.  Lucky I am old school.

Firstly, I looked at the code of W91300B to see if I could attach that form exit to the main screen, then use repeat-for-grid.  The code looked bad – there were a bunch of if statements and about 40 parameters to the BSFN that does the work to reset the schedule – and I was not feeling that brave.

Secondly, I stopped and started and changed scheduler servers and reset the master; this did not help.  I did some https://support.oracle.com searches that revealed nothing.

So I brought out – Captain VBScript!

set objShell = wscript.createobject("WScript.Shell")  ' handle to the Windows shell for sending keystrokes

' Wait until the JDE scheduler window has focus
Do until success = True
  success = objshell.AppActivate("Schedule Jobs - [Job Schedule--Canberra, Melbourne and Sydney]")
  wscript.sleep 1000
Loop

wscript.sleep 100
wscript.echo "Reset these schedules"

' For each of the 300 selected jobs: open the menu (Alt+M), choose the
' reset schedule option (t), confirm (Enter), then OK and save (Alt+O)
m=0
do until m = 300
      wscript.sleep 500
      objshell.sendkeys "%m"
      wscript.sleep 500
      objshell.sendkeys "t"
      wscript.sleep 500
      objshell.sendkeys "{Enter}"
      wscript.sleep 500
      objshell.sendkeys "%o"
      m=m+1
loop


And ran this at the command line

cmd> wscript resetSchedule.vbs

This then smashed through my 300-ish jobs without a schedule and created them – nice.  There are probably much better ways of doing this.

Remember that you need to match the screen name exactly once you select one of the records – you don't want it working on the wrong screen.  My title, once I select a row from the main screen, is "Schedule Jobs - [Job Schedule--Canberra, Melbourne and Sydney]", as seen below:


Success = objshell.AppActivate("Schedule Jobs - [Job Schedule--Canberra, Melbourne and Sydney]")

What you need to do is match the number of records you are going to select with the main counter in the loop.  Highlight all of the records in P91300, then hit Select.

Now, run the script at the command line.

How I saved 1000s of dollars in one afternoon



I implemented a fairly basic schedule engine in AWS that works on specific tags to control when machines are up or down.  The cost of our demo kit was starting to add up, so I looked into the console (which is great) and saw that the main costs were EC2 and RDS.  To fix this I followed a two-pronged approach:


  1. mandatory tagging
  2. schedule for shutdown


Kinda chicken-and-egg, because the solution I chose defined the tagging.


I basically followed the guides below:


https://docs.aws.amazon.com/solutions/latest/instance-scheduler/welcome.html

https://s3.amazonaws.com/solutions-reference/aws-instance-scheduler/latest/instance-scheduler.pdf


At Fusion5, we defined a single active tag for schedules and a single scheduler stack:


If you use the tag name ScheduleUptime on an EC2 or RDS instance, then that instance will be on a schedule – it's that simple.  The Fusion5 stack ID is f5sched.

So, set your instance to have the tag ScheduleUptime.
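For example, from the AWS CLI that might look like this (the instance ID is made up, and I'm assuming the tag value names the schedule you want):

# put the demo box on the AU office-hours schedule
aws ec2 create-tags \
  --resources i-0123456789abcdef0 \
  --tags Key=ScheduleUptime,Value=AU-office-hours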

 

The schedules rely on 2 building blocks: periods and schedules.

A period defines the hours and days for starting and stopping, kinda like cron.  The schedule uses the period, but has more controls around the behaviour of the EC2 instance when there is an issue (and, importantly, carries the timezone).

For Fusion5, we started with the following schedules.


  • NZ-office-hours
  • AU-office-hours
  • AU-7till7
  • NZ-7till7

I think that you can work out what they actually mean!

1.3 Periods:

Command to create a period:

[root@localhost ~]# scheduler-cli create-period --begintime 07:00 --description "7 till 7 baby" --endtime 19:00 --weekdays 0-4 --name 7till7Mon2Fri -s f5sched

{
   "Period": {
      "Description": "7 till 7 baby",
      "Weekdays": [
         "0-4"
      ],
      "Begintime": "07:00",
      "Endtime": "19:00",
      "Type": "period",
      "Name": "7till7Mon2Fri"
   }
}

1.4 Schedules:

Command to create a schedule:

[root@localhost ~]# scheduler-cli create-schedule -s f5sched --description "Shannon TEsting 9 to 5" --timezone "Australia/Melbourne" --name AU-office-hours --periods "office-hours"

{

   "Schedule": {

      "RetainRunning": false,

      "Enforced": false,

      "Description": "Shannon TEsting 9 to 5",

      "StopNewInstances": true,

      "Periods": [

         "office-hours"

      ],

      "Timezone": "Australia/Melbourne",

      "Type": "schedule",

      "Name": "AU-office-hours"

   }

}

 

1.5      Viewing schedules and periods

1.5.1      Command line

Install the command line:

wget https://s3.amazonaws.com/solutions-reference/aws-instance-scheduler/latest/scheduler-cli.zip
unzip scheduler-cli.zip
python setup.py install

Then you can use it – but of course you need the AWS command line installed and configured first (https://shannonscncjdeblog.blogspot.com/2017/06/move-tb-from-nz-to-aus-via-s3-bucket-of.html).
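Once it's installed, you can list what you've defined – a quick sketch (verify the subcommand names against your version of the CLI):

[root@localhost ~]# scheduler-cli describe-schedules -s f5sched
[root@localhost ~]# scheduler-cli describe-periods -s f5sched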

 

 

DynamoDB

There are two tables in DynamoDB.

The config table (the top one in the console) is the only one to worry about – and they are not really relational tables, either.

Basically, everything is saved as a JSON document:

{
  "begintime": {
    "S": "07:00"
  },
  "description": {
    "S": "7 till 7 baby"
  },
  "endtime": {
    "S": "19:00"
  },
  "name": {
    "S": "7till7Mon2Fri"
  },
  "type": {
    "S": "period"
  },
  "weekdays": {
    "SS": [
      "0-4"
    ]
  }
}
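If you prefer the CLI to the console, you can scan the config table directly – the table name below is illustrative, so check the DynamoDB console for the exact name your stack created:

aws dynamodb scan --table-name f5sched-ConfigTable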



curl an orchestration with environment variables in bash

It's so nice when you are old and get to feel productive.  This has been a phenomenon when doing lots of JDE on AWS.

AWS have an amazing CLI with which you can script everything.

Orchestration in JD Edwards allows you to create an API out of anything you want to do in JDE…

Finally, cURL is an amazing utility in linux.

Oh, and have I mentioned, I have a black belt in ksh and awk ?

So, you add all of this together, and I'm getting productive.

We are doing some funky things in JDE to bring it out of the dark ages and into a CI/CD pipeline on AWS. This does take time, but we're getting there.

For one scenario we have created some highly available batch servers that can be replaced when there is a new package deploy, that's right – they are ephemeral.

So, to enable this (not to give too much away), there has been a lot of work by SvdS and myself – mainly SvdS – if you are in the game – you know who this is.

We have a fixed IP that we can attach to the server when it becomes the new batch server, but to make this smooth we need to disable all of the queues on the original, make sure no batch jobs are running, and then move them over.

So, an orchestration to hold all the queues makes a lot of sense; likewise an orchestration to release all of the queues, and an orchestration to run some batch jobs – just to make sure that things are cool (and to reconnect the web server to the new machine).

Let's do it:

holdAllQueues is an orchestration that I wrote (I'll include it for download): you give it a hostname and it'll hold the queues – wow, simple. But if you have ever done this manually, you'll know how painful it is!

If you want to call this in a ksh, then:

#!/usr/bin/ksh
set -x
# expect exactly one argument: the logic server whose queues we want to hold
if [ $# -ne 1 ]
  then
    echo "USAGE $0 SERVERNAME"
    exit 1
else
  serverName=$1
fi
echo ${serverName}
# build the JSON body - note the quoting needed to expand the shell variable inside single quotes
data='{"MYLOGICHOST":"'"${serverName}"'"}'
echo ${data}
#exit
curl -v --request POST \
  --resolve au.jde.something.com:443:10.116.23.100 \
  --url https://au.jde.something.com/jderest/orchestrator/orch_fusion5_holdAllQueues \
  --header 'Accept: */*' \
  --header 'Authorization: Basic WU9VQVJFOkFCVVRURkFDRQ==' \
  --header 'Cache-Control: no-cache' \
  --header 'Connection: keep-alive' \
  --header 'Content-Type: application/json' \
  --header 'Host: au.jde.something.com' \
  --header 'accept-encoding: gzip, deflate' \
  --data "${data}"
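Assuming you saved the script as holdAllQueues.ksh, calling it is then a one-liner (the server name here is made up):

./holdAllQueues.ksh aubatch01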

The reason that this might help is that passing an environment variable (or parameter) into cURL can be curly (like that?).

So you'll see that this simple example converts parameter 1 from the script input into the server name in the JSON body sent to the orchestration that I wrote.

The other thing I have in this example is avoiding certificate problems with the --resolve option; this is great because, for many reasons, these addresses are not internally resolvable (it's me, not you).

So, this shows you how to use shell variables and create a really handy function to hold all of the batch queues – perhaps before a package deployment.

This allows me to move IPs to the new machine and then start loading it up with more batch jobs without an outage – YAY!

Cool SQL queries for rows and indexes

I always forget some of the basic SQL Server catalog views, so this is going to help me next time:

Query to get the current index definitions from the database.  Remember that you can cross reference this with some of this knowledge:  https://shannonscncjdeblog.blogspot.com/2017/06/jde-slow-missing-indexes-find-it-fast.html (namely F98712 & F98713)

Remember that not all index definitions need to be in specs (really – sorry if you disagree); performance-based indexes can sit happily in the database and do not need to be put through the SDLC. Let's be honest, the index creation process is horrendous in JDE!

The query below, when cross-referenced with F98712 and F98713, could tell you missing or incorrect indexes with a single statement. You could also run R9698711 and R9698713 to help you out.


select i.[name] as index_name,
    substring(column_names, 1, len(column_names)-1) as [columns],
    case when i.[type] = 1 then 'Clustered index'
        when i.[type] = 2 then 'Nonclustered unique index'
        when i.[type] = 3 then 'XML index'
        when i.[type] = 4 then 'Spatial index'
        when i.[type] = 5 then 'Clustered columnstore index'
        when i.[type] = 6 then 'Nonclustered columnstore index'
        when i.[type] = 7 then 'Nonclustered hash index'
        end as index_type,
    case when i.is_unique = 1 then 'Unique'
        else 'Not unique' end as [unique],
    schema_name(t.schema_id) + '.' + t.[name] as table_view,
    case when t.[type] = 'U' then 'Table'
        when t.[type] = 'V' then 'View'
        end as [object_type]
from sys.objects t
    inner join sys.indexes i
        on t.object_id = i.object_id
    cross apply (select col.[name] + ', '
                    from sys.index_columns ic
                        inner join sys.columns col
                            on ic.object_id = col.object_id
                            and ic.column_id = col.column_id
                    where ic.object_id = t.object_id
                        and ic.index_id = i.index_id
                        order by col.column_id
                        for xml path ('') ) D (column_names)
where t.is_ms_shipped <> 1
    and index_id > 0
order by i.[name]

Provides something like:

index_name   columns                         index_type                 unique      table_view  object_type
F0000194_PK  SYEDUS, SYEDBT, SYEDTN, SYEDLN  Clustered index            Unique      NULL        Table
F0002_PK     NNSY                            Clustered index            Unique      NULL        Table
F00021_PK    NLKCO, NLDCT, NLCTRY, NLFY      Clustered index            Unique      NULL        Table
F00022_PK    UKOBNM                          Clustered index            Unique      NULL        Table
F0004_2      DTSY, DTRT, DTUSEQ              Nonclustered unique index  Unique      NULL        Table
F0004_PK     DTSY, DTRT                      Clustered index            Unique      NULL        Table
F0004D_PK    DTSY, DTRT, DTLNGP              Clustered index            Unique      NULL        Table
F0005_2      DRSY, DRRT, DRKY, DRDL02        Nonclustered unique index  Unique      NULL        Table
F0005_3      DRSY, DRRT, DRDL01              Nonclustered unique index  Not unique  NULL        Table
F0005_PK     DRSY, DRRT, DRKY                Clustered index            Unique      NULL        Table



SELECT
       sOBJ.name AS [TableName]
      , SUM(sPTN.Rows) AS [RowCount]
FROM
      [RRCDBSP01\JDE920].[JDE_PRODUCTION].[sys].[objects] AS sOBJ
      INNER JOIN [RRCDBSP01\JDE920].[JDE_PRODUCTION].[sys].[partitions] AS sPTN
            ON sOBJ.object_id = sPTN.object_id
WHERE
      sOBJ.type = 'U'
      AND sOBJ.is_ms_shipped = 0x0
      AND index_id < 2 -- 0:Heap, 1:Clustered
GROUP BY
      sOBJ.schema_id
      , sOBJ.name
ORDER BY [TableName]


Tables that are not in JDE, but are in the database:

       SELECT
       sOBJ.name AS [TableName]
      , SUM(sPTN.Rows) AS [RowCount]
FROM
      [JDE_PRODUCTION].[sys].[objects] AS sOBJ
      INNER JOIN [JDE_PRODUCTION].[sys].[partitions] AS sPTN
            ON sOBJ.object_id = sPTN.object_id
WHERE
      sOBJ.type = 'U'
      AND sOBJ.is_ms_shipped = 0x0
      AND index_id < 2 -- 0:Heap, 1:Clustered
       AND not exists (select 1 from [JDE920].[OL920].[F9860] ol WHERE ol.siobnm = sOBJ.name and ol.sifuno = 'TBLE')
GROUP BY
      sOBJ.schema_id
      , sOBJ.name


Tables that are in JDE:

       SELECT
       sOBJ.name AS [TableName]
      , SUM(sPTN.Rows) AS [RowCount]
FROM
      [JDE_PRODUCTION].[sys].[objects] AS sOBJ
      INNER JOIN [JDE_PRODUCTION].[sys].[partitions] AS sPTN
            ON sOBJ.object_id = sPTN.object_id
WHERE
      sOBJ.type = 'U'
      AND sOBJ.is_ms_shipped = 0x0
      AND index_id < 2 -- 0:Heap, 1:Clustered
       AND exists (select 1 from [JDE920].[OL920].[F9860] ol WHERE ol.siobnm = sOBJ.name and ol.sifuno = 'TBLE')
GROUP BY
      sOBJ.schema_id
      , sOBJ.name



Stolen from:

converting weblogic processes into services - and getting things to autostart on windows

Here is a simple tip for anyone who wants to be a better technical person. If you ever build a machine, please ensure that everything starts automatically. You need scripts, and you need to test that when the machine starts, everything starts. Your job is NOT done until this occurs.



Here is a quick trick for registering all of your weblogic processes as services, and therefore getting them to start automatically. 

This will be beneficial when you are looking at cloud and looking to save money.

Firstly, you'll find "installSvc.cmd" in a couple of places.

There is a template file in C:\Oracle\Middleware\Oracle_Home\wlserver\server\bin which probably has the wrong directories specified (if you did not keep things standard – or even if you did). Check the paths in the last two lines.

In C:\Oracle\Middleware\Oracle_Home\user_projects\domains\E1_Apps\bin you'll have installSvc.cmd for the node manager – run this.

Then create a copy of installSvc, and make it look something like the below.

Note that the names of the servers are from:
C:\Oracle\Middleware\Oracle_Home\user_projects\domains\E1_Apps\servers>dir /B
12345-myAccess-standalone
9201_PS920_F5WEB
9202_DV920_F5WEB
9204_MKT920_F5WEB
9205_PLAY920_F5WEB
9206_CONFIG_F5WEB
9208_PLAY920_AISHTML
9212_DV920_AIS_F5WEB
9215_PLAY920_AIS_F5WEB
9222_DV920_ORCH_F5WEB
9225_PLAY920_IOT_F5WEB
9235_PLAY920_ADF_F5WEB
9245_DV920_BSSV_F5WEB
AdminServer
AdminServerTag
domain_bak
testing

Create your file and run it, then you'll have services!

SETLOCAL
set DOMAIN_NAME=E1_Apps
set USERDOMAIN_HOME=C:\Oracle\Middleware\Oracle_Home\user_projects\domains\E1_Apps
set SERVER_NAME=9201_PS920_F5WEB
set PRODUCTION_MODE=true
set ADMIN_URL=http://10.10.1.108:7001
call "C:\Oracle\Middleware\Oracle_Home\user_projects\domains\E1_Apps\bin\setDomainEnv.cmd"
call "C:\Oracle\Middleware\Oracle_Home\wlserver\server\bin\installSvc.cmd"
set SERVER_NAME=9202_DV920_F5WEB
call "C:\Oracle\Middleware\Oracle_Home\wlserver\server\bin\installSvc.cmd"
set SERVER_NAME=9204_MKT920_F5WEB
call "C:\Oracle\Middleware\Oracle_Home\wlserver\server\bin\installSvc.cmd"
set SERVER_NAME=9205_PLAY920_F5WEB
call "C:\Oracle\Middleware\Oracle_Home\wlserver\server\bin\installSvc.cmd"
set SERVER_NAME=9206_CONFIG_F5WEB
call "C:\Oracle\Middleware\Oracle_Home\wlserver\server\bin\installSvc.cmd"
set SERVER_NAME=9208_PLAY920_AISHTML
call "C:\Oracle\Middleware\Oracle_Home\wlserver\server\bin\installSvc.cmd"
set SERVER_NAME=9212_DV920_AIS_F5WEB
call "C:\Oracle\Middleware\Oracle_Home\wlserver\server\bin\installSvc.cmd"
set SERVER_NAME=9215_PLAY920_AIS_F5WEB
call "C:\Oracle\Middleware\Oracle_Home\wlserver\server\bin\installSvc.cmd"
set SERVER_NAME=9222_DV920_ORCH_F5WEB
call "C:\Oracle\Middleware\Oracle_Home\wlserver\server\bin\installSvc.cmd"
set SERVER_NAME=9225_PLAY920_IOT_F5WEB
call "C:\Oracle\Middleware\Oracle_Home\wlserver\server\bin\installSvc.cmd"
set SERVER_NAME=9235_PLAY920_ADF_F5WEB
call "C:\Oracle\Middleware\Oracle_Home\wlserver\server\bin\installSvc.cmd"
set SERVER_NAME=9245_DV920_BSSV_F5WEB
call "C:\Oracle\Middleware\Oracle_Home\wlserver\server\bin\installSvc.cmd"
ENDLOCAL
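Once the services exist, it's worth confirming they're set to start automatically. A quick sketch from cmd – the service-name prefix varies by WebLogic release, so run the query first and adjust the hypothetical service name below:

sc query state= all | findstr /i "E1_Apps"
sc config "wlsvc E1_Apps_9201_PS920_F5WEB" start= delayed-auto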



License Audit for Oracle ERP Cloud


A client came to me the other day and asked if we could implement some sort of licence audit over the top of the Oracle cloud applications (ERP Cloud). This could easily be applied to Engagement Cloud too. It was an opportunity to implement usage insights over the top of another ERP (we've done a few now).

Know what your users are doing. Know what programs are being used, and compare this to what you are paying. You can compare historical periods – a complete 360-degree view of your cloud usage.

More importantly, when you are paying per user per month, you really need to understand who is using what – so you can manage your costs. It's classic cost control: you want to ensure that you are getting what you pay for.

We were able to do a quick modification in ERP Cloud, and now we can see what ALL of our users are doing all of the time!


We are able to filter on country, environment, module and more.



We can then drill down to module in Oracle; the Oracle module is calculated from the page title, which is in the format:

“Program – Module – Oracle Applications” – you can see this below. We’ve done some internal work mapping these modules to Oracle pricing modules and have put them into the price column.
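As a trivial sketch of that parsing in awk (the page title is a made-up example, and the " – " separator is an assumption):

echo "Manage Invoices – Payables – Oracle Applications" | awk -F' – ' '{print $2}'
# prints: Payables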



We are then able to look holistically at the aggregate module usage and the estimated cost:

We can then actually look into the usernames and what they are doing.


The combination of all this information will allow you to start fine-tuning your Oracle licences and saving money.

These reports can be scheduled.  We can also compare with invoice data to give you the net difference if you like.

UBE Performance suite - with a dash of cloud and AI


Understand your batch performance, immediately and over time.


Batch performance in JD Edwards is a strange one. You only give it ANY attention when it's diabolical… If it's reasonable, you leave it alone. My clients start to get nervous about batch performance when they are getting close to the start and the finish of their batch windows. Another classic example of batch performance getting attention is when scheduled jobs do not finish, or there is a problem in the evening.

I like to be a little more proactive in this situation and have developed some insights with my team to allow you to quickly identify trends, oh – and then we’ve sprinkled a little bit of AI over the top to give you some amazing exception handling.  That’s right, AI in JD Edwards UBE processing – all will be revealed.

Firstly, we need a mechanism for taking the data out of the JD Edwards tables that are rich with UBE execution details; we upload the records into Google BigQuery and then report over this data with some great dashboards. We accelerate the value of this process by plugging each execution into AI and asking whether it was a valid result, given the past results of that UBE.

We have an agent that can run on any machine “on premise” that has internet access and access to your server map data sources. It's got some intelligence built in so that you can schedule it at a cadence that you like, and it'll extract the relevant records and place them into cloud storage [secured by keys and tokens and encryption and more].

I know a pretty graph is not normal in JDE (this can be hosted as CafeOne or an E1 Page too), so you can see all of the relevant information at the source.



What this pretty graph can do is give you KEY metrics on all UBE processing, like rows processed, time taken and number of executions.  You have controls where you can slice and dice this interactively:









If you choose a particular environment (as above), user or date range, all reports and graphs are going to change. You can also look at particular queues or batch servers if you like.


The example above shows the jobs for JDE and SCHEDULER and only the JPD920 environment – to narrow your focus.

We then provide a number of screens, depending on what you are after:


If you are looking for the history and trend line of a single job, you look at the job focus report:


We can see actual processing times, how many times the job has been run, who is running it, and how long it is taking on a regular basis. This is great trend information. Also, we do not purge your cloud data – so you can do a complete analysis of what jobs are running and who is running them, while keeping your ERP lean and mean. We could even put your output in the cloud if you want – much cheaper storage!


I really like the graph above, this shows me ALL history of ALL jobs and how long they are taking on average and how many rows they are processing.  This is really valuable when looking for potential improvements.

See how many jobs are running at each hour of the day – knowing when the hot spots are for batch

You can look at your queues and find out what queues are quiet for the next processing opportunity.

You can get some great insights to solve performance problems, to know who is running what, and to keep your complete batch history.

Now for the AI

I’m a victim of technology – I want to put AI into everything – and this is a great use case. We have the ability to look at things like return codes, rows processed, time of day and runtime, and use AI to determine whether the metrics are expected. If the algorithms (which have been trained with ALL your historical data) think that there is an issue with any of those dimensions, they can raise an exception to you. This is great for what I call “silent killers”. If a batch job generally processes 40,000 rows and processes 0 one night, it'll still finish with status ‘D’ – yet AI is sure to determine that this is an exception, and it'll send you a message. That is going to save time and money when fixing all the scheduled jobs that ran without sales update having been run properly! The nice thing about AI is that it looks at the time of day and makes genuine decisions about the exceptions.

We run this as an end-to-end service, allowing clients access to all consoles and reporting. We can also schedule any of the reports to be delivered at a cadence that suits. Reach out if you want to know more about your batch processing! There is a small monthly cost for the service.
