Shannon's JD Edwards CNC Blog

Garbage collection and tuning the JVM for JDE


This starts to get pretty complex pretty quickly.

We’ve noticed that a JDE web install on Azure feels slow – something feels wrong.

You can load up 20 users with OATS and watch WebLogic 12.2.1.2.0 start to slow down.

Actually, speed is not the only problem: the machine grinds to a halt with CPU usage.

We are using a super simple JD Edwards test, just basic navigation around P01012 and WSJ.  Could not get easier – you’d think.

The machine is on struggle street.

We can see the GCs on the machine:

image

Look at the GC’s!

Too bouncy.  This is 25 users doing the same thing and logging in at different times.  There should be completely flat lines.


image

Wow, we are getting a full GC – which is a total lockup of the JVM – pretty much ALL of the time, taking up to 9 seconds at times.

We need to get a fix for this.

What you quickly learn about GC in JVMs is that there are 10,000 parameters, and without knowing how the objects are allocated by JDE, JDBC etc. it’s hard to know which levers you should be pulling.

The tip here is: what levers is Java pulling by default?

Let’s find out.

Create a file called hello.java and paste in the following:


public class hello {

    public static void main(String[] args) {

        // Prints "Hello, Garbage" to the terminal window.

        System.out.println("Hello, Garbage");

    }

}

image

Then javac hello.java

Do this from the JDK that is running your JVM.  You might need to ps or Task Manager your way to finding this part of the command line.

image


C:\Program Files\Java\jdk1.8.0_162\bin>javac hello.java

C:\Program Files\Java\jdk1.8.0_162\bin>java -Xloggc:c:/GC/%t_gclogger.log -XX:+PrintGC -XX:-PrintGCDetails hello

Hello, Garbage


You are telling java to be verbose and print what it’s starting with – nice.

So now you can go to c:\GC and find out what java is defaulted to run with (Windows 2016 in my case):


Java HotSpot(TM) 64-Bit Server VM (25.162-b12) for windows-amd64 JRE (1.8.0_162-b12), built on Dec 19 2017 20:00:03 by "java_re" with MS VC++ 10.0 (VS2010)

Memory: 4k page, physical 16776756k(7382136k free), swap 20970920k(5873592k free)

CommandLine flags: -XX:InitialHeapSize=268428096 -XX:MaxHeapSize=4294849536 -XX:+PrintGC -XX:-PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:-UseLargePagesIndividualAllocation -XX:+UseParallelGC 

So this is the naked truth for what the windoze JVM thinks that it should start with.

You can find all of the options listed here: http://www.oracle.com/technetwork/java/javase/tech/vmoptions-jsp-140102.html
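A quick way to see every flag the JVM will actually run with (not just the headline ones) is -XX:+PrintFlagsFinal – a standard HotSpot option.  For example, filtering for the interesting ones:

C:\Program Files\Java\jdk1.8.0_162\bin>java -XX:+PrintFlagsFinal -version | findstr /i "HeapSize UseParallelGC"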

Using this knowledge for JDE WLS is easy – at least you are going to know what the default params are.  The other nice thing is that if you enable verbose GC logging, you’ll be able to check the JVM history nice and easy.
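For a WLS instance, the usual place to hang these flags is JAVA_OPTIONS in setDomainEnv.cmd.  A minimal sketch, assuming a default Windows domain layout (adjust paths for your install, and note that %t must be doubled inside a batch file):

rem in %DOMAIN_HOME%\bin\setDomainEnv.cmd - always-on GC logging for the JDE managed server
set JAVA_OPTIONS=%JAVA_OPTIONS% -Xloggc:c:/GC/%%t_gclogger.log -XX:+PrintGCDetails -XX:+PrintGCTimeStamps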


Java HotSpot(TM) 64-Bit Server VM (25.92-b14) for windows-amd64 JRE (1.8.0_92-b14), built on Mar 31 2016 21:03:04 by "java_re" with MS VC++ 10.0 (VS2010)

Memory: 4k page, physical 16776756k(11436092k free), swap 20970920k(14323404k free)

CommandLine flags: -XX:+AggressiveOpts -XX:InitialHeapSize=2147483648 -XX:MaxHeapSize=2147483648 -XX:+PrintGC -XX:-PrintGCDetails -XX:+PrintGCTimeStamps -XX:+TraceClassLoading -XX:+TraceClassResolution -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:-UseLargePagesIndividualAllocation -XX:+UseParallelGC

9.772: [GC (Metadata GC Threshold)  356941K->37709K(2010112K), 0.0497603 secs]

9.822: [Full GC (Metadata GC Threshold)  37709K->36962K(2010112K), 0.1157298 secs]

12.303: [GC (Metadata GC Threshold)  104455K->51591K(2010112K), 0.0146142 secs]

12.318: [Full GC (Metadata GC Threshold)  51591K->32982K(2010112K), 0.0572060 secs]

12.490: [GC (System.gc())  56999K->33214K(2010112K), 0.0015203 secs]

12.492: [Full GC (System.gc())  33214K->23886K(2010112K), 0.1942664 secs]

24.818: [GC (Metadata GC Threshold)  518419K->68032K(2010112K), 0.0430348 secs]

24.862: [Full GC (Metadata GC Threshold)  68032K->66171K(2010112K), 0.2901091 secs]

77.256: [GC (Allocation Failure)  590971K->231300K(2010112K), 0.1385489 secs]

85.555: [GC (Allocation Failure)  756100K->287925K(1813504K), 0.1240276 secs]

88.482: [GC (Allocation Failure)  616117K->264082K(1911808K), 0.0430451 secs]

91.890: [GC (Allocation Failure)  592274K->262682K(1938944K), 0.0406183 secs]

95.013: [GC (Allocation Failure)  633882K->264546K(1927680K), 0.0386120 secs]

97.406: [GC (Allocation Failure)  635746K->261578K(1955840K), 0.0398701 secs]

104.722: [GC (Metadata GC Threshold)  564677K->278930K(1948672K), 0.0511449 secs]

104.773: [Full GC (Metadata GC Threshold)  278930K->126063K(1948672K), 0.2996287 secs]

271.829: [GC (Allocation Failure)  535151K->179467K(1964032K), 0.0409892 secs]

1712.011: [GC (Allocation Failure)  606987K->143383K(1958912K), 0.0272208 secs]

2124.193: [GC (Allocation Failure)  570903K->188328K(1969152K), 0.0613237 secs]

2130.849: [GC (Allocation Failure)  624552K->211110K(1962496K), 0.0687304 secs]

2140.347: [GC (Allocation Failure)  647334K->246768K(1937920K), 0.1185877 secs]

2145.322: [GC (Allocation Failure)  655856K->299962K(1952256K), 0.1423171 secs]

2152.052: [GC (Allocation Failure)  709050K->347231K(1825280K), 0.1426725 secs]

2153.875: [GC (Metadata GC Threshold)  460486K->366841K(1845248K), 0.1227049 secs]

2153.997: [Full GC (Metadata GC Threshold)  366841K->275362K(1845248K), 1.1694144 secs]

2158.558: [GC (Allocation Failure)  557474K->297258K(1872384K), 0.0199824 secs]

2161.004: [GC (Allocation Failure)  547626K->304734K(1872896K), 0.0478987 secs]

2165.078: [GC (Allocation Failure)  555102K->336275K(1885696K), 0.0419868 secs]

2167.680: [GC (Allocation Failure)  602515K->398773K(1875968K), 0.1051175 secs]

2171.856: [GC (Allocation Failure)  665013K->435986K(1853440K), 0.1449460 secs]

2175.736: [GC (Allocation Failure)  677138K->464323K(1828352K), 0.1165887 secs]

2180.276: [GC (Allocation Failure)  705475K->492642K(1864192K), 0.1381040 secs]

2183.488: [GC (Allocation Failure)  725602K->511599K(1864192K), 0.1451506 secs]

2187.965: [GC (Allocation Failure)  744559K->535714K(1864192K), 0.1557600 secs]

2191.799: [GC (Allocation Failure)  768674K->549443K(1864192K), 0.1302633 secs]

2195.937: [GC (Allocation Failure)  782403K->576326K(1864192K), 0.1069931 secs]

2198.253: [GC (Allocation Failure)  809286K->594966K(1864192K), 0.0552789 secs]

2201.184: [GC (Allocation Failure)  827926K->614134K(1864192K), 0.0333154 secs]


I still have this to deal with for 20 users:

image

Wish me luck!


A long way to tell a short story – HAFS mounts on an ODA for PrintQueue


Everyone wants to create a disposable compute environment, it’s the right thing to do.

If your machines / servers are stateless, then this is the first step to being elastic and more portable – think containers…  I know that I’m talking about an ODA here, but you can still put constructs into the design that allow you to be more flexible for HA and DR…  That is, taking stateful data off your machines and creating a level of abstraction.

So, helping out with JDE, PrintQueue needs to go!

If you want to make your environment elastic, then eventually you need to put printqueue somewhere else and mount the location on your enterprise server.

Imagine that you did all of this, and then when the filesystem was mounting on boot, it wanted to fsck the mounted drive, and perhaps you do not have a network yet…  Guess what happens – NOTHING.

Imagine if this was an ODA and the VM did not really give you great access to grub – wow – you've got a problem!

Welcome to my world!

/etc/fstab looked something like:

10.255.252.180:/u01/app/sharedrepo/printqdv /mnt/printqueue  nfs nfsvers=3,rw,bg,intr 0 0

Easy – leaving this automatic.  The use of hard is implied if not specified, so we do not need it in the mount options.

We created this FS on ODA_BASE as a repo

oakcli create repo printqdv -size 200G -dg DATA

Then on ODA_BASE we created the NFS export using grid-based srvctl (HAFS):

srvctl stop exportfs -name printqdv
srvctl remove exportfs -name printqdv
srvctl add exportfs -name printqdv -id havip_1 -path /u01/app/sharedrepo/printqdv -clients 10.255.252.150 -options "rw,no_root_squash"
srvctl start exportfs -name printqdv

So we have a shared printqueue as a repo on ODA_BASE that all of the guests can mount using NFS.  Therefore when using WSJ, all jobs are in a single location and we can support seamless augmentation of logic hosts.

But when automounting on guests (enterprise servers), we found that when the machine needed to fsck, it wanted to do this to the PrintQueue mount and would not boot.

This is highly risky, so we implemented the following:

10.255.252.180:/u01/app/sharedrepo/printqdv /mnt/printqueue  nfs nfsvers=3,rw,bg,intr,_netdev 0 0

adding _netdev to try and tell the boot sequence to only attempt this if there is a network – that should be much nicer.  Though I am still a little worried that this is a massive FS and I don’t want to wait for an fsck EVER!

But, could I risk this?  I want to do noauto and have a systemctl command mount the printqueue manually after boot.

Go to this dir as root:

/etc/systemd/system

Create a new file (use a name that you want for your service):

vi jdePrintQ.service

Add the following contents

[Unit]
Description=JD Edwards PrintQueue
After=network.target

[Service]
#these need a full path
ExecStart=/usr/bin/mount /mnt/printqueue
ExecStop=/usr/bin/umount /mnt/printqueue

[Install]
WantedBy=multi-user.target

Now chmod

chmod 755 ./jdePrintQ.service

You can now use the command systemctl start jdePrintQ and she’ll start – easy.

You can stop it too:

systemctl stop jdePrintQ

Get the status:

systemctl status jdePrintQ

Aug 08 12:41:29 bear. systemd[1]: Started JD Edwards PrintQueue.
Aug 08 12:41:29 bear. systemd[1]: Starting JD Edwards PrintQueue...
Aug 08 12:41:29 bear. mount[3351]: mount.nfs: /mnt/printqueue is busy or already mounted
Aug 08 12:41:29 bear. systemd[1]: jdePrintQ.service: main process exited, code=exited, status=32/n/a
Aug 08 12:41:29 bear. systemd[1]: Unit jdePrintQ.service entered failed state.
Aug 08 12:41:29 bear. systemd[1]: jdePrintQ.service failed.
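Note the status=32 above: /usr/bin/mount exits as soon as the filesystem is mounted, so systemd thinks the service has died (and a second start fails because it is already mounted).  A small tweak to the [Service] section – Type=oneshot and RemainAfterExit are standard systemd options – makes the unit track the mounted state properly:

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/bin/mount /mnt/printqueue
ExecStop=/usr/bin/umount /mnt/printqueue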

I guess that this is belt and braces, but rescuing a non-booting VM on an ODA is not the most fun job in the world.

Just ensure that fstab now looks like this too:

10.255.252.180:/u01/app/sharedrepo/printqdv /mnt/printqueue  nfs nfsvers=3,rw,bg,intr,_netdev,noauto 0 0

The addition of noauto

Make sure you set your service to start on boot:

[root@bear system]# chkconfig jdePrintQ on
Note: Forwarding request to 'systemctl enable jdePrintQ.service'.
Created symlink from /etc/systemd/system/multi-user.target.wants/jdePrintQ.service to /etc/systemd/system/jdePrintQ.service.


It’s really important that you are ready to rescue a VM’s boot disk – or any disk for that matter – on the ODA.  Remember that if a repo runs out of space, or there is an ACFS problem (which we had lots of) that prevents ODA_BASE seeing / writing to the repo, you are probably going to get corrupt machines – or at least the need to fsck drives.  Make sure that you have the ability to clone a boot disk and mount it to a temporary guest so that you can fix any problems with the /etc/fstab or other files that might be giving you problems on boot.  Perhaps you can stop services too.  I did NOT have a problem getting to the console of the machine.

Load testing JD Edwards – some load testing tips from the field


I’ve said it before and I’ll say it again: load testing must be one of my favourite and most rewarding consulting gigs.  I really like it – probably revealing a little too much about myself here.

I like constantly pursuing bottlenecks and trying to give clients confidence that the changes are going to make a difference and that they are pushing their hardware and software to the limit – getting the best bang for buck.

The age-old question is – how hard do you push?

Here are a couple of things that I live by when load testing:

  • Choose everyday transactions – sales order entry, PO entry, address book find, launch a simple job.  Do not just hit the system with all of the slow and hard transactions; you are not going to test anything realistic.
  • Do not go to the end of large grids.
  • Know the type of transactions that you are performing – are they logic intensive or are they data intensive – and understand their individual timings and, more importantly, their consistency.
  • Get your virtual users waiting more than 3 seconds per interaction with the browser; you would be surprised when you see how hard your JDE is actually hit.
  • Include a mix of batch and interactive.
  • Make sure that you count records before and after, so that you know transactions are hitting the database.
  • Make everything repeatable.  Add a PO, approve a PO, print a PO – so you do not run out of data.
  • Dates should be variables.
  • Don’t start near the end of a month; you’ll have date problems and warning problems before you know it.
  • Use OATS – it’s great for JDE load testing.
  • Get a specialist (like me) in to do the job.  It’s a tough gig to set this up and run it yourself.

But, back to the question, how many users?

Here are some stats that you may or may not believe.

For a site that has approximately 400 active JDE users a week.

image

This is cookie based.

They’ve loaded 1.4 million pages and served over 37,000 logins.

image

Wait, that is cool – that works out to about 37 pages per login.  ERP analytics tells us this!

And we can see that the timeout is probably 30 minutes!

But, we can also tell the peak periods of usage.

A simple custom report can tell me pageviews per minute, at a peak of 117

image

and pageviews per hour, at a peak of 3405

image


This is really good data for load testing.

If we looked at this basically, we’d want to load test 1,394,000 pages and try and divide out a value for # days, 10 hours a day etc. etc…

1,394,000 / (13 weeks x 5 work days) / 10 hours per day = 2,144 pages an hour – but the peak from the data is 3,405 (about 1.5 times the estimate).

Taking this to another level, 2,144 / 60 = 35 pages a minute, which is less than a third of the peak of 117!

Load testing for this client could take many different forms, but if I’m loading more than 117 pages a minute – then we are net positive.

What am I testing you ask?  21 a second… or 1,260 pages a minute!  So I’m actually load testing at over 10 times their maximum load for the last 3 months.  I think that I can wind back the testing wait time and sit back and relax!

Oracle SE2 license clarification for Oracle Technology Foundation



I think it's really important to know your rights when it comes to database licensing and the cloud.

I'm only going to talk about the database here, that is to say – Oracle SE2.

What can I do with SE2?  Are there going to be significant performance issues if I use SE2?  Perhaps, or perhaps not...



Description of support for SE2:

In September, 2015 Oracle announced the withdrawal of Oracle Database Standard Edition and the availability of a new product, Oracle Database Standard Edition 2. This announcement is relevant for all JD Edwards customers who have licensed Oracle Technology Foundation for JD Edwards EnterpriseOne because this product includes a limited use license for Oracle Database Standard Edition.

In light of the new Oracle Database Standard Edition 2 product offering, Oracle Technology Foundation for JD Edwards EnterpriseOne has been updated to include a restricted use license of Oracle Database Standard Edition 2. Refer to the JD Edwards EnterpriseOne Licensing Information User Manual for a detailed description of the restricted use licenses provided in the Oracle Technology Foundation for JD Edwards EnterpriseOne product.


augmented with


Licence information

Oracle Technology Foundation for JD Edwards EnterpriseOne may be licensed instead of EnterpriseOne Core Tools and Infrastructure for customers wanting the Oracle components but are not currently licensed for EnterpriseOne Core Tools and Infrastructure. The Oracle components included with Oracle Technology Foundation for JD Edwards EnterpriseOne are listed under "Entitled Products and Restricted User Licenses" below. Oracle Technology Foundation for JD Edwards EnterpriseOne would cover the JD Edwards EnterpriseOne Core Tools and Infrastructure prerequisite requirement.

Entitled Products and Restricted Use Licenses

A license for Oracle Technology Foundation for JD Edwards EnterpriseOne includes the restricted-use licenses of: Oracle Database Standard Edition 2; Oracle Internet Application Server Standard Edition; Oracle WebLogic Server Standard Edition; JRockit JVM; Oracle Application Server Portal; Oracle WebCenter Services; Oracle BPEL Process Manager; Oracle Business Activity Monitoring; Oracle Application Server Single Sign-On; Oracle Access Manager Basic; Oracle Application Server Web Cache; and Business Intelligence Publisher (formerly XML Publisher).
As noted in the preceding paragraph, a license for Oracle Technology Foundation for JD Edwards EnterpriseOne includes a restricted-use license for Oracle Database Standard Edition 2. Oracle Database Standard Edition 2 may be used solely in conjunction with any and all JD Edwards EnterpriseOne programs licensed under your agreement, including third party programs licensed for use with JD Edwards EnterpriseOne programs. Oracle Database Standard Edition 2 may be installed on an unlimited number of processors. When used with Oracle Real Application Clusters, Oracle Database Standard Edition 2 may be installed on any number of RAC nodes. If you require features and functions beyond those included with the Oracle Database Standard Edition 2, or if you require use of Oracle Database beyond your JD Edwards EnterpriseOne implementation, you may purchase a non-limited use license by contracting directly with Oracle or one of its authorized distributors.
A license for Oracle Technology Foundation for JD Edwards EnterpriseOne also includes a restricted-use license for the following components of Oracle Fusion Middleware: Oracle Application Server Standard Edition or Oracle WebLogic Server Standard Edition (either of these products may be used, but both products cannot be used for the same function); Oracle Application Server Portal; Oracle WebCenter Services; Oracle BPEL Process Manager; Oracle Business Activity Monitoring; Oracle Application Server Single Sign-On; Oracle Access Manager Basic; Oracle Application Server Web Cache; and Business Intelligence Publisher. These components may be used solely in conjunction with any and all JD Edwards EnterpriseOne programs licensed under your agreement, including third party programs licensed for use with JD Edwards EnterpriseOne programs. These components may be installed on an unlimited number of processors. If you require use of these components beyond your JD Edwards EnterpriseOne implementation you may purchase a non-limited use license for any of the Oracle components by contracting directly with Oracle or one of its authorized distributors.
As noted in the preceding paragraph, a license for Oracle Technology Foundation for JD Edwards EnterpriseOne includes a restricted-use license for Oracle Business Intelligence Publisher. Oracle Business Intelligence Publisher may be used to create or modify reports that use the Oracle supplied database schema, or modifications to that schema done to support modifications to supplied Oracle JD Edwards EnterpriseOne programs. For the avoidance of doubt, examples of uses that are not permitted include, but are not limited to, the following: adding new reports that support different applications or database schemas other than JD Edwards EnterpriseOne.
Summary


Point 1: Oracle Database Standard Edition 2 may be used solely in conjunction with any and all JD Edwards EnterpriseOne programs licensed under your agreement

Q:  Does this mean things like DSI or reportsnow? I'm told YES


Point 2: Oracle Database Standard Edition 2 may be installed on an unlimited number of processors

Wow, so if you use RAC, you can have unlimited nodes for JDE with standard edition.

Point 3: Okay, you cannot use statistics and performance packs – that's a shame. But if you have a mature environment, you'll get away with it.

Point 4: Data Guard. RDS is pretty much looking after this for you (and all the complexity). They are maintaining and shipping your logs to a remote replica and doing all of the network aliasing behind the scenes. This is SO easy to implement.

AWS summary:

This is where things get a little exciting. Let's look at RDS options for SE2.





AWS tells us (and I believe their lawyers have done the work) that they can commission a 16-way SE2 machine with BYOL. And as clearly stated above, you have BYOL for as many cores as you can point your SE2 at... but limited to 16 because SE2 does that.

You could be addressing 16 cores and 488 GB of RAM under your standard JD Edwards agreement. For a multi-AZ deployment in this situation, you'd be paying approximately US$12,000 per month, but a more modest 16-way with 122GB of RAM is US$3,500 a month - not bad.

With my load testing experience in JDE, 16 cores goes a LONG way - you don't need parallel either for JDE (IMHO).

Remember that this is Highly available (without RAC), so you actually have two machines ready to process your requests if AWS lose an availability zone.




Above you can see all of the server classes that you can address for SE2.

I appreciate that Oracle will probably have a different view on this, and I recommend that you seek specific advice before acting on the above recommendation / opinion.








SQL Server miracles


I have not used SQL Server for ages, in fact – it’s been spotty throughout my career.

But, I was just doing some hectic Media Object massaging (F00165) – I’m converting them from type 1 (physical files) to type 5 (HTML links).

My SQL caused a duplicate key on a large insert, but look at what SQL Server tells me!!!

image

It tells me the bad record!!!

How many group by queries has this just saved me, so awesome.
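For anyone who has not seen it, the message reads something along these lines (illustrative only – the constraint name and key values here are made up):

Violation of PRIMARY KEY constraint 'F00165_PK'. Cannot insert duplicate key in object 'DV920.F00165'. The duplicate key value is (GT0411, 00001, 5).
The statement has been terminated.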

PowerShell command line options starter kit


I don’t know how I’ve been able to do this until now, but I’ve avoided PowerShell until last week.

I might quickly become addicted to it though, after my brief introduction.

I have a requirement to upload files into sharepoint from a UBE, simple enough.

I got some amazing Fusion5 people to cut me a script, and it’s great for 90% of scenarios, but I need to do a couple of mods for 100% of scenarios.

My modifications revolve around the ability to process parameters, $? $# etc – I miss ksh.

So, here are some tips for me in 3 months’ time when I need to do this again.

First and foremost, your Param block needs to be the first statement of the script!


Param (
     [string]$filetoupload,
     [string]$username = $( Read-Host "Input username, please" ),
     [string]$password = $( Read-Host "Input password, please" ),
     [switch]$force = $false,
     [switch]$run = $false
)

This terseness is AWESOME!

When calling the script with the following, it assigns all of the parameters:

PS C:\fusion5> .\uploadContentsToSP_BM_Test_singlefile.ps1 -filetoupload c:\shannon.pd -username shannon.moir@fusion5.com.au -password hello

So, if I did a:

write-host $username

in the script, it’d tell me the username.  The other nice thing is that if you do not specify the parameter and there is a Read-Host directive, it’s going to prompt only for the items that have not been entered.  So cool!

Also, you’ll see that I’ve used a mix of [switch] and [string] parameters.  [string] of course holds a value, but [switch] is cool – it’s binary.

In your code, you can then use:


if ($force) {
write-host "file to upoload: " $filetoupload
}

So therefore, if the script is called with -force then it’ll execute the write-host function – easy.  You can also make a parameter mandatory:


Param (

    [Parameter(Mandatory=$true)]
    [string]$filetoupload,
     [string]$username = $( Read-Host "Input username, please" ),
     [string]$password = $( Read-Host "Input password, please" ),
     [switch]$force = $false,
     [switch]$run = $false
)

Calling this:

PS C:\fusion5> .\uploadContentsToSP_BM_Test_singlefile.ps1 -filetouplad c:\shannon.pd -username shannon.moir@fusion5.com.au -password hello
C:\fusion5\uploadContentsToSP_BM_Test_singleFile.ps1 : A parameter cannot be found that matches parameter name
'filetouplad'.
At line:1 char:45
+ .\uploadContentsToSP_BM_Test_singlefile.ps1 -filetouplad c:\shannon.p ...
+                                             ~~~~~~~~~~~~
     + CategoryInfo          : InvalidArgument: (:) [uploadContentsT..._singleFile.ps1], ParameterBindingException
     + FullyQualifiedErrorId : NamedParameterNotFound,uploadContentsToSP_BM_Test_singleFile.ps1

So if you do not specify the mandatory parameter correctly (see that I have a spelling mistake in -filetouplad), we get an error that the parameter cannot be found.

It could be more graceful if you managed it yourself (less big red writing).
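A minimal sketch of handling it yourself instead of relying on Mandatory (the script name here is just illustrative):

Param (
     [string]$filetoupload
)
# fail gracefully with a usage message instead of a big red ParameterBindingException
if (-not $filetoupload) {
     Write-Host "Usage: .\uploadContentsToSP.ps1 -filetoupload <path>"
     exit 1
}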

The Mandatory directive only affects the parameter that follows it, not all of them.

Use Google to Find My Oracle Support Content


What??  This could be great.  Use Google search for https://support.oracle.com
You can now use Google and other search engines to find My Oracle Support content!
Go to Google and search using all of the Google capabilities you have come to rely on.
Even better, bookmark site:support.oracle.com and search Oracle content exclusively.


A nice announcement in Nov last year, but it does not really provide the results that I need.

For example: site:support.oracle.com e1 jvm size



Wow, how cool!  Or is it?  I'm going to need to fine-tune my searching.




Not really what I want - not enough information

but...

I get a lot more information from the actual support site.  I get a lot more suggestions and quality documentation.



Can I find anything in particular?  Let's search for exact knowledge documents – I cannot find them...

I cannot find this for example:



A bit of a toy – a bit hit and miss for me.

Infocus and Summit – what did I learn?


Firstly it was great to see so many familiar faces, and meet some new people.  I had a number of people come and say hi at the conference and say thanks for blogging – that may have been my favourite part of the conference.  For the people that did come and say hi – I say thanks!

Infocus is a Quest-run customer event, specifically for JD Edwards clients and partners.  This was followed by a 2 day partner conference, known as Summit.  Both conferences were in Denver, Colorado, which spoilt attendees with 30 degree (Celsius) days and 0 humidity.

I must admit that when I went into the conference, I was worried.  I was a little worried about investing strategically into JD Edwards.  My thoughts were somewhat reiterated when I did not hear about any huge announcements.  But then I heard Lyle at Summit (a lot of clients will not hear this), and I was once again excited about what was to come.  I don’t know specifically what Lyle said to change my opinion, but it was made up of the following:

  • Acknowledgement of the formation of a specific sales team at Oracle that are rewarded and driven to sell JDE – cool
  • Premiere support of JD Edwards 9.2 to at least 2030 – amazing!
  • That NetSuite and cloud ERP are going to be the dominant SaaS offerings, but there is a lot of whitespace between these two products
  • Lyle showed passion for JD Edwards and wanting to “protect his turf” when it came to SaaS sales trying to take away the JDE opportunities
  • Reiteration of what JDE is and what it can do for an organisation -
    • awesome integration is easy now with AIS –> orchestration
    • awesome integration with Cafe1
    • usability through the roof with all of the UDO’s that are supporting citizen development
  • Remember that ERP’s (especially mature ones) are going to create sales orders, track stock and do all of the basic transactions that you need.  JD Edwards especially is a mature product that will do ALL of these things easily.  Now you need to look at how you are going to augment this base functionality.  You’ll do that by embracing all mega-trends – vis-a-vis this diagram:


image

In this diagram I’ve tried to show that JD Edwards is awesome and does what it is told.  Great security, great database, stable, single source of truth.  As clients we need to respect master data and expose JDE securely using great integration (AIS / Orchestration) to enable amazing technology to be used and consumed side by side with your ERP.  You do not need to wait for AI enabled ERP – you can have it today!

This is being put into action today,

  • Fusion5 are already using AI to interpret images attached to media objects and indexing and reporting on these.  This is a SIMPLE integration that can be plugged into any public cloud ERP implementation.  And this is just the beginning.
  • Fusion5 are integrating IOT devices and raising work orders when things get too hot or too humid – this is being done today.
  • And many more examples of using the strength of your core ERP – but being innovative about hooking into new technology.

But, how can everyone do this? I personally think that you need 3 capabilities to deliver continuous innovation.

image

Both Summit and Infocus reiterated the need for all clients to get to 9.2 and start to embrace configuration, not code.  Reduce your technical debt by retiring modifications, which will allow you to embrace continuous innovation.

Get a partner (like Fusion5) to help you adopt continuous innovation by keeping you code current.  Fusion5 have CD3 (continuous development, continuous deployment, continuous delivery) which allows continuous innovation.  It’s important to start to get economies of scale by getting a partner to help you stay code current and allow you to focus on business while we take care of the platform.

If you are looking for ways that you can reinvigorate your JD Edwards and your passion for innovation, think about having a hackathon, or perhaps an innovation bootcamp.  We run these internally and for clients and they are a great way of coming up with ideas that could make a difference to your business or perhaps change an industry.

I attended a couple of great interactive sessions on 1 click provisioning and 64bit JD Edwards – two things that are going to affect you and your JD Edwards installation positively.  I also attended a great session on containers and JDE, if you are not looking into this technology (as I have said before) you should be.  This is going to change the way we think about large systems.

9.2.3 is going to contain some great enhancements (especially in orchestration) and you should look closely at the next large Oracle conference (Open World?) for this to be announced.

My summary

What it lacked in large announcements, it made up for in core messaging and core product acknowledgment.  Embrace continuous delivery and find a partner to help you.  Embrace innovation and find a partner to help you.  JD Edwards is here for the long haul: use its strengths and augment and extend to the cloud – plugging in AI, ML and other megatrends to improve your decision making capabilities.  Configure your ERP, don’t modify it.


Hacky post... cafe1 bulk rename / change of attributes

Once again, it takes me a while to get to the meat of the post, but I want to set the scene of what has been done so that I can finally describe the workaround.

I've recently been involved with integrating SharePoint with JD Edwards for media object storage.

This is a fairly complex solution, but it's highly strategic.

This is another way of reducing the need for the deployment server and also being more efficient in how media objects work.  If you can store all of your media objects in SharePoint (@sharepoint.com) too, then you do not need to worry about backups and recovery and stuff.  You can then think about writing some cool Power Apps that might feed from the media objects (scanned images for example), and drill back to JDE - that'd be nice.

There are 3 major deliverables for this piece of work:

  1. Historical media object conversion
  2. Ad hoc upload of files from JDE to sharepoint
  3. display context sensitive media objects to end user
1 is easy: we used a PowerShell script that uploads all of the physical files to SharePoint.  We created some different directories and put the objects in relevant folders in SharePoint.

2. Slight challenge, but there is an existing UBE that uploads all of the scanned images.  This was modified slightly (caressed) to call a PowerShell script that uploads the file immediately to SharePoint and then writes an F00165 record (type 5) for consumption down the track.

3.  Finally - the hardest bit: displaying the MO's natively in E1.  This was a bit of a challenge.  We needed to write an e1page that would take a couple of parameters (GDTXKY, GDOBNM) and do an AIS call to find the relevant MO's and then display this content.  EASY!!  NO, it was not.  This was difficult because of the cross-domain content rules using SharePoint online.  You cannot natively display the contents of SharePoint in an iframe.  A REAL PAIN!

Azure functions to the rescue.  The team wrote an Azure function that takes the URL for the attachment as a parameter and sends back the binary representation of the file, so that we get around all of the cross-domain security problems.  This was painful, but at the end of the day it works.  Once this e1page has been created, all you need to do is hook up cafe1's to reference the e1page and pass in the data from the forms to generate the GDOBNM and the GDTXKY.

This shows flexibility in assigning the MO window URL

This is configuration NOT code... Actually think about the solution above...  It's basically all configuration in JDE, not code.  We are using lots of UDOs to get this working.  If you give your users access to the e1page and also access to the cafe1's - they can all see their PDF's in the cafe1 window...  Oh - and they do not need to download them first.


Okay, so now we are at the point where I'm describing my problem.  I created 27 e1pages that reference the test website jdepd920.client.com, but this is going to change to jde.client.com on the go-live weekend.  I do not want to edit 27 UDO's to cater for this.

So, SQL to the rescue.  Clever JDE stores the UDO's as an XML document in a BLOB.  Don't bother trying to hack the e1page on the server, that is going to be replaced all of the time.  You need to hack the e1page definition in F952450.


--backup
select * into JDE_DV920.DV920.F952450SRM from JDE_DV920.DV920.F952450 ;

--update
UPDATE JDE_DV920.DV920.F952450
set WOOMRBLOB = replace(convert(varchar(max) , convert (varbinary (max) , WOOMRBLOB)),'jdedv920.client.com.au', 'jdedv.client.com.au') 
where UPPER(convert(varchar(max) , convert (varbinary (max) , WOOMRBLOB))) like '%JDEDV920%'
--WHERE WOWOOBNMS like '%MO%';
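Before and after the update, a quick count makes a handy sanity check that you caught everything (same table and host pattern as above):

--how many UDO blobs still reference the old host?
select count(*) from JDE_DV920.DV920.F952450
where UPPER(convert(varchar(max) , convert (varbinary (max) , WOOMRBLOB))) like '%JDEDV920%' ;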

Great, so I was just able to update the URL for 27 UDOs (and any personal copies) with one statement.  This would have taken me a long time to reserve them all, edit, share etc.
Native SharePoint window / attachment without the user doing ANYTHING.  No clicking the paper clip and no downloading PDFs.
Normal warnings apply: you need to be very careful changing this sort of stuff.  Test in lower environments too!




hacky post 2–bulk promotion of UDOs (specifically cafe1)


This really follows on from the last post (not “The Last Post”, the emotive Aussie song played on ANZAC Day) – my last post about UDO’s and renames.

This involves synchronisation of UDO’s that are cafe1 screens that point to an E1page.

As you know, if you promote an e1page (or create it again), it’ll have a different URL – depending on the environment that you are signed into.  They cannot work with a relative path (HEY – JDE enhancement idea)!  So, if you have an e1page that does some nice AIS lookups and renders some cool information, there is a bit of admin getting this promoted.

So, captain hack comes to the rescue.

First, we are dealing with F9861W (for pathcode related UDO records); they are seen in P98220U in JDE.  These are actually pointers to F952450, which holds the actual UDO’s – there is a blob field.

Below is the view of P98220U for cafe1 (Composite App Framework if you are posh):

image

When looking at the BLOB, we can work out what we need to change:

select convert(varchar(max) , convert (varbinary (max) , WOOMRBLOB)) from JDE_PY920.PY920.F952450
where UPPER(convert(varchar(max) , convert (varbinary (max) , WOOMRBLOB))) like '%MYDEMO.FUSION5.COM%';

Which reveals something like the following – the actual cafe1 definition:

--<?xml version="1.0" encoding="UTF-16"?> <RIAF CATALOGMAX="0" VERSION="1"><CONTAINER CONTAINER_TYPE="2"         LAYOUT_TYPE="1" MODE="0" PERCENTAGE="100.0" PREFER_OBJECT_ID=""             WIN_STATE="0"><CONTAINER CONTAINER_TYPE="1" LAYOUT_TYPE="2"             MODE="0" PERCENTAGE="71.15" PREFER_OBJECT_ID=""             WIN_STATE="0"/><CONTAINER CONTAINER_TYPE="2" LAYOUT_TYPE="2"             MODE="0" PERCENTAGE="28.85" PREFER_OBJECT_ID="1534329490322"         WIN_STATE="0"/></CONTAINER><CONTENT ID="1534329490322"><OBJECTTYPE>GENERICURL</OBJECTTYPE><TITLE><TABNAME>View MO Attachment</TABNAME><TABDESCRIPTION>View MO Attachment</TABDESCRIPTION></TITLE><CONTENTDATA><PR_PATH_TYPE>0</PR_PATH_TYPE><PR_TEMPLATE>https://mydemo.fusion5.com/jde/e1pages/E1P_1808150001CUST_55/home.e1page?struct=AB&amp;key=100
--</PR_TEMPLATE><PR_PARTS><PR_PART><PR_INDEX>11</PR_INDEX><PR_DESCRIPTION>struct</PR_DESCRIPTION><PR_COMPONENT_LIST><PR_CUSTOMIZED_TEXT_COMPONENT>GT0411S</PR_CUSTOMIZED_TEXT_COMPONENT></PR_COMPONENT_LIST></PR_PART><PR_PART><PR_INDEX>13</PR_INDEX><PR_DESCRIPTION>key</PR_DESCRIPTION><PR_COMPONENT_LIST><PR_CONTROL_COMPONENT>GC0_1.80</PR_CONTROL_COMPONENT><PR_SEPARATOR_COMPONENT>|</PR_SEPARATOR_COMPONENT><PR_CONTROL_COMPONENT>GC0_1.27</PR_CONTROL_COMPONENT><PR_SEPARATOR_COMPONENT>|</PR_SEPARATOR_COMPONENT><PR_CONTROL_COMPONENT>GC0_1.26</PR_CONTROL_COMPONENT><PR_SEPARATOR_COMPONENT>|</PR_SEPARATOR_COMPONENT><PR_CONTROL_COMPONENT>GC0_1.47</PR_CONTROL_COMPONENT></PR_COMPONENT_LIST></PR_PART></PR_PARTS></CONTENTDATA><KEYCTRLST/><KEYVALLST/><KEYALIALST/><URLUSER>SHANNONM</URLUSER><USER>SHANNONM</USER><UPMJ>2018-08-15</UPMJ><UPMT>212531</UPMT></CONTENT><TITLECATALOG><TITLE><TABNAME>View MO Attachment</TABNAME><TABDESCRIPTION>View MO Attachment</TABDESCRIPTION><CONTENTID>1534329490322</CONTENTID></TITLE></TITLECATALOG></RIAF>

Nice.  So again, with the power of SQL I can actually change this without checking out the UDO, making the change and checking it back in.

Create a backup copy of what you want to change:

SET IMPLICIT_TRANSACTIONS ON ;

select * into JDE_PD920.PD920.F952450SRM from JDE_PD920.PD920.F952450 ;

commit;

UPDATE JDE_PD920.PD920.F952450SRM
set WOOMRBLOB = replace(convert(varchar(max) , convert (varbinary (max) , WOOMRBLOB)),'FROMVALUE.com.au', 'TOVALUE.com.au')
where UPPER(convert(varchar(max) , convert (varbinary (max) , WOOMRBLOB))) like '%FROMVALUE.COM%';

commit;

Okay, now we have modified all the UDO’s, we can put them into another environment!  I’m inserting into PY from PD – after changing those URLs:

insert into jde_py920.py920.F952450 select * from pjdesql01.JDE_PD920.PD920.F952450SRM
where UPPER(convert(varchar(max) , convert (varbinary (max) , WOOMRBLOB))) like '%TOVALUE.COM%';

commit;

But they are not in P98220U – What?

We need to do the same for F9861W:

select * from pjdesql01.jde920.ol920.f9861W
where SIPATHCD = 'PD920'
--and SIWOBNM like 'CAF%'
and SIUSER = 'SHANNONM';

select * into JDE_PY920.PY920.f9861WSRM
from pjdesql01.jde920.ol920.f9861W
where SIPATHCD = 'PD920'
and SIWOBNM like 'CAF%'
and SIUSER = 'SHANNONM';

select * from JDE_PY920.PY920.f9861WSRM ;

update JDE_PY920.PY920.f9861WSRM set sipathcd = 'PY920' ;

insert into pjdesql01.jde920.ol920.f9861W select * from JDE_PY920.PY920.f9861WSRM;

drop table JDE_PY920.PY920.f9861WSRM;

Done!  You’ll see that my selection criteria are specific to my use case – you could join F9861W to F952450 to get all of the items that you need, as sketched below.
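A sketch of that join – I’m assuming the blob-side web object name column is WOWOOBNM, so verify the aliases in your release before trusting it:

--hypothetical join of the UDO pointers (F9861W) to the UDO blobs (F952450)
select si.SIWOBNM, si.SIPATHCD
from pjdesql01.jde920.ol920.F9861W si
join JDE_PD920.PD920.F952450 wo on wo.WOWOOBNM = si.SIWOBNM
where si.SIWOBNM like 'CAF%' ;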

How many update packages are too many?


I love blaming system stability problems on the old “too many update packages”.  How credible is this in the modern JD Edwards environment?  From my experience – not very.

I agree that it used to have weight, but honestly – I do not think that you are going to have stability issues because of too many update packages.  Look at this screenshot:

image

This is a small font, but we actually have 129 update packages – and everything is working fine.

We all need to remember that we are not dealing with spec files anymore, so the old spec file corruption is not a problem.

I must admit, I think that 129 is too many – I’d tidy this up with a full package – but it’s not causing any problems.  The correct mods are being deployed and are working.

Anyone else got an opinion?

PS.  I’m nerdily excited about 9.2.3 being released.  I even checked the update centre this morning and found nothing…  Waiting…

A JDE Mobile application that you can nerd love?

Do you have the acronym CNC in your resume?  
Do you own a mobile phone?  

Then this post is for you!

My amazing team at Fusion5 have created the first release of our JDE server manager mobile application.

You should be able to find it on the playstore soon, watch this space.

When we are live, you just search for Fusion5 mobile manager.

It's plugging into the REST services from Server Manager and giving you a nice mobile interface to do something with them.

Have you had a client (or users) ring you and say they've just changed the period or changed some company constants, and you need to put your pint down and log in on the laptop?  No need to do this anymore!

Just pull your phone out and clear the cache.

We hope to update this with more down the track - starting and stopping and user counts.


JDE Clear cache – look for the big 5!
It remembers the last server and the cache.

You configure your SM link, either internal or external; your MDM can work out the VPN if needed.

Choose the servers that you want to reset cache for.  You can see the status too

Choose the cache

Tells you that things are okay

Augmented with ERP analytics, you can get the following view too
ERP analytics information available for how many users are currently logged in and the performance that they are getting on average.  This is JDE users that have been active in the last 5 minutes.

We can see the devices that people are using and also the pages that they are running on the mobile.

Look at the busy times of day to see when JDE is getting used the most.



Actionable insights from your JDE usage data


I’ve blogged about ERP analytics a lot, I know – it gets a bit boring.  But I’m trying to change that.

Fusion5 have been working on the next level of insights using Google data studio – which is NEXT level in terms of reporting!

Take a look at this:


image

We are able to sort access by system code and tell clients what modules they are using the most, and also look at individual scatter diagrams of page load times vs. page views – always looking for the outlying dots – what is going wrong?


image

The above shows the number of sessions mapped against page views for a period of time.  We look out for user ID sharing for those users that are logging in 100’s of times for a 24 day period!


image

We can finally sort out the debate about the best browser – it seems that Chrome is the fastest one at this site, well, looking at the last 1.3 million page loads!

Although this is only the beginning.

We’ve written a custom connector that allows us to connect to the data from within JD Edwards – from data studio.  We are using AIS server data requests for this.
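Under the covers, an AIS data request is just a JSON POST to the dataservice endpoint.  A sketch only – the host is made up, and authentication and exact payload options vary by tools release:

POST https://ais.example.com:9300/jderest/dataservice

{
  "targetName": "F986110",
  "targetType": "table",
  "dataServiceType": "BROWSE",
  "maxPageSize": "100"
}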

How?  This is how

1.  connect to JD Edwards using the custom JDE connector:

image


Then complete the details in the helper

image

And now you can see all of the fields for the table you chose:

image

You can choose the fields that you want on the report.
image

Great, now we start reporting

image

We can drag and drop data to get a dashboard view of the data in JDE – Awesome.

Here is some hard work that I did on the WSJ tables:

image

So we can see what jobs are being run, what queues they are running in, and also a scatter chart of rows processed vs average runtime.  This is very helpful when determining performance.  You can choose to view this data by queue, user or job if you need to.


If you want a beta copy of our connector – get in contact!

ERP analytics – what is the performance impact?



I have a number of clients asking about the performance impact of ERP analytics.  What is ERP analytics?  A comprehensive suite of reports over the top of JD Edwards usage information, giving system administrators instant feedback on what is being used, how often and how fast.  You know this if you read my blog.

People get a little nervous about the effects that this might have on interactive performance, so I thought that I might allay those fears, with some data.

This is an example of me logging into JD Edwards, pulling up 5 applications and then closing them again.  This is good for ERP analytics: some of them have auto-find, and therefore you get an idea of the database performance and sometimes the application server performance.

This is easy too: we can give a list of applications that have auto-find and compare these with applications that do not – therefore when the database is slow, these are all affected more!  Cool hey?  This can all be done with a custom segment too, for example:

So based upon my basic segment definition to have a list of forms and apps that have autofind enabled (actually best to use FORM, it’s more accurate)

image

I can see the relative performance of the “autofind” screens vs. standard screens, which allows us to narrow down those that are affected more by database slowdowns.

Back to my post…

So I want to know the impact of Google analytics on my ERP.  I’ve got it plugged in and enabled, and now I can enable developer view in Chrome, which gives me some really cool stats on what is going on under the covers.

I can see that on a session that went for 2.2 minutes or approximately 132 seconds, we are sending 1.9KB (of 332KB), and we waited in TOTAL for Google analytics 955ms – under a second.  This has loaded and closed 5 applications and logged in and out.  So there is a lot of activity for less than 1 second (less than 1%) delay.  Great news.


image

Remember the insights that you can gather out of this data, which actually get more valuable the longer that you have it enabled.

JDE licence audit (license audit) – where to begin?


I did steal this from a post I did on LinkedIn, but I can paste better images here!

Even the grammatical rules are difficult with this word (licence). In Australia, for example, when using the term licence as a noun (software licence), we spell it with a couple of C's. In the US of A, things are considerably easier - it's only spelt with one C: license.

Are you worried about JD Edwards licence audits? Worry no more. We give you peace of mind, and allow you to easily understand what programs, modules and user activity in JD Edwards is active - and therefore allow you to understand how you sit from a licence and compliance point of view.

When is a seat not a seat? When you are talking about a JD Edwards licence! Do you know whether you need a licence if someone is just browsing the information in another system code? For example, if a user is licensed for JD Edwards Sales Order Management (system code 42), and they look at item availability (system code 41) - do you need an inventory management licence for this? The price list certainly contains details of the separate price codes (price list). What have I been told? It's complicated. Various people have told me that you do not need a licence, and others the opposite - all these people were from Oracle.

If you accept transaction licensing: (you make updates in the module).

You need to determine your counts by looking for all of the tables that are within that module (say for sales [42]): find all of the tables that are system code 42 and get a distinct count of users that have updated records in any of these tables. You can generally do this by looking for user fields and also looking at the last transaction date. This would allow you to group by month and determine a unique list of users that have updated records in the system code.
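A sketch of that counting approach for sales order detail (F4211), assuming the usual audit columns – SDUSER (updated by) and SDUPMJ (date updated, JDE Julian CYYDDD) – and your own data schema:

--distinct users updating F4211 since 1 Jan 2018 (118001 in JDE Julian)
select count(distinct SDUSER)
from PRODDTA.F4211
where SDUPMJ >= 118001 ;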

If you do not want to do this yourself (it's trivial SQL - but will take time due to lack of indexes), there are products out there that can assist. qsoftware for example have done the heavy lifting to map tables to licenced modules and can give you some nifty reports.

Though I do caution you to be aware of multiplexing. If you are simply using 3rd party software to do the work of a JDE user, you might get caught. Read this carefully:

Named User Plus: is defined as an individual authorized by you to use the programs which are installed on a single server or multiple servers, regardless of whether the individual is actively using the programs at any given time. A non human operated device will be counted as a named user plus in addition to all individuals authorized to use the programs, if such devices can access the programs. If multiplexing hardware or software (e.g., a TP monitor or a web server product) is used, this number must be measured at the multiplexing front end. Automated batching of data from computer to computer is permitted. You are responsible for ensuring that the named user plus per processor minimums are maintained for the programs contained in the user minimum table in the licensing rules section; the minimums table provides for the minimum number of named users plus required and all actual users must be licensed.

If you use access-based licensing (you use it, you pay):

That is to say, if a program is used by a user - you need a licence for said use.

You have two options here. You can use the object usage tracking functionality (if enabled), from which you can extract this information with SQL and perhaps some standard reports.

Or, you can use some intuitive reporting out of our ERP analytics package, showing you exactly how many users are logging in and exactly what they are doing. See more details here. These intuitive reports can show you things like:

  • Active users per module (per day / week or month)
  • image

The report above shows a high level of application usage per system code.  This allows you to track back to your licence agreements and work out what system codes you are licensed for.  Note that it can tell you how many unique applications are being used, how many distinct users are using it and how many sessions have been recorded.  This is awesome for working out what your ERP is actually doing.

  • Active programs per module - how deep is your footprint
  • image

The above report is a different take on the data, but starts to include application name.  This is good to know all of the apps that you are using within system codes.  Every report can be exported to Excel – simple (see below).  You can also see by system code how many users and how many distinct programs in the graph.

image


  • Active modules per user - knowing what your users need when you get new ones.

image

This report shows you how many modules and how many applications each of your users are using.  This is good for knowing the complexity of each user and comparing them.  Also handy if you need more users – you’ll know what the licence impacts are.

image

The above report is basically for export.  You can see users and applications.  For the date range and environment you supply, you can see what users have done in that period of time.  This includes how many times they have logged in (sessions) and active days of use of that program, as well as how many times the program has been used.  This is really good to know what you might be able to take away.


  • historical user access, month on month and year on year (when ERP analytics subscription is active)

In summary

Talk to your partner (Fusion5 perhaps) to understand how you are going to ensure that you are not currently breaching your ERP licence requirements. We have a multitude of interactive reporting options so that you can understand exactly what you need and exactly what you are currently using.

Contact me directly if you want a demo or want to plug this into your ERP. 


Advanced UBE performance analysis for JD Edwards

I'm finally starting to close the loop on something that has interested me for a long time, that is batch performance from a holistic point of view.  This is having granular statistical information on a job by job basis, but also having the ability to do macro analysis.

I've done lots of posts over time (search for F986110, F986114) on using advanced database queries to view jobs and data.  Today I'm going to post on something much more intuitive and visually attractive for batch performance analysis.  
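If you want to poke at the raw tables yourself first, this is the flavour of query I mean – a sketch against F986110 (Job Control Status Master); check the column aliases and your server map schema:

--UBE counts by execution host and job status
select JCEXEHOST, JCJOBSTS, count(*)
from SVM920.F986110
group by JCEXEHOST, JCJOBSTS ;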

Today I want to show you how I've used data studio from google to slice and dice UBE job performance data to show at a high level how things are running and give you the ability to drill down on very specific data.


Sample UBE batch dashboard, showing rows processed, execution time and # of times run


This has been a challenge for a long time.  I've seen various plugins and attempts, but I think what Fusion5 have put together is pretty neat.  Please watch the video at the bottom of the screen as a demonstration of the types of operations you can do over your batch information.

I guess there are a couple of key points:

Loading the data

There are two mechanisms for doing this:

  1. A public facing AIS server and our bespoke view over F986114 & F986110 for easy insights – this is the Fusion5 connector using AIS, with easy configuration to report live on JDE data.
  2. Fusion5 provide you with a SQL statement that produces the exact csv of information and we upload this for you into a datasource, which you can use with the existing reports.  Once you've seen this done, you'll be able to self serve really easily!

Viewing the data

It's easy to create reports and even easier to view the data using our pre-built reports.  I showed a teaser above, but here are a couple of additional screen shots.

Advanced filtering capabilities

Look for data points that are not normal, filter by user, job, job queue or execution host - you decide.
All of the reports that you see here are interactive, so as you hover over data points, you get feedback that is relevant to the graph being displayed.  One of the really nice features is the ability to drill down into the actual UBE runtime detail directly from the dashboards.  You'll see this in the video.

Execution detail available directly from the dashboard.

Be nice, this is one of the first videos that I've embedded.  This shows some of the filtering and advanced functionality that we are catering for.



Want some?  You need to get in contact – fusion5.com.au – and do some searching on first and last names...  Good luck!

A post a day keeps the blog police away


I’m trying to get a bit more information out around continuous innovation and applying this to JD Edwards. 

Fusion5 are working hard on having a single console to allow people to report on their site and their modifications, and to see how many actual objects need to be updated to get them code current.  This is the easiest way of getting up to date.

To get to a single “pane of glass view”, we’ve done a lot of development on the side of JD Edwards (and in JD Edwards).

  • ERP analytics to measure engagement and ERP usage – know what you use and what needs updating / what needs to be tested
  • data studio to graphically understand UBE usage and statistics, further augmenting the above into a 360 degree view of a client's modifications.
  • Form compare, an AIS based utility that can compare all of the controls on any form (between environments and even releases) – this will catch DD changes, vocab overrides and more – great for seeing what has changed on a form.  You can also use this for testing security!
  • modification complexity matrix, looking over your OMW actions and understanding how complex your modifications actually are.
  • object code hashing.  We have developed custom algorithms to create a hash of all objects in JD Edwards, with this information we can unequivocally compare objects between releases, environments and pristine – not just rely on date and time stamps.  This is a massive piece of work and it’s going to allow us to understand client sites at a whole new level.  We can use this information to compare with what is being released in the update centre, allowing clients to know EXACTLY how modified they are.  Combine this with the ERP analytics information above and you’ll also know your testing and updating strategies.
  • We are working on some really cool green / blue deployments on AWS, burning staging AMIs for JDE and deploying these with some fancy session draining.  This is going to allow us to provide uninterrupted access to JDE, while pushing out ESU’s and updates continuously.  This in itself is an amazing step forward for agile deployments, and sits on the shoulders of all of the advancements that you see above.
  • We are using OATS for auto testing and have written some pretty neat additional software items that allow for better testing of JD Edwards forms from an interactive perspective.
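
To illustrate the hashing idea from the list above, here's a minimal concept sketch, assuming you have already exported each object's specs to one file per object.  The export step, directory names and file extension here are hypothetical; the real algorithms hash the spec records themselves rather than files:

# Build a manifest of hash-per-object for the site and for pristine.
# Running find from inside each directory keeps the paths comparable.
( cd ./site-specs && find . -type f -name '*.spec' -exec sha256sum {} \; | sort -k 2 ) > site.manifest
( cd ./pristine-specs && find . -type f -name '*.spec' -exec sha256sum {} \; | sort -k 2 ) > pristine.manifest

# Any line that differs is an object whose CONTENT has changed,
# regardless of what its date and time stamps claim.
diff site.manifest pristine.manifest

Same idea at our end, just done properly against the spec tables instead of exported files.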


Really, we want to look at all of the advancements we can make to get JDE into a CI/CD pipeline, as below:


image

This is precisely what Fusion5 are trying to do at the moment: make sure that JDE can fit into this new paradigm (new for JDE, anyway) as much as possible.

We are working on many of the pieces of this puzzle for our clients. 


I guess now you have seen the big picture and our organisational goal: to slip JDE into a Continuous Integration / Continuous Delivery pipeline, with all of the unique things we are doing in this space to make it as automated as possible.

Today’s video is foundational in terms of how you can start to understand your users and your ERP better by plugging in google analytics (ERP analytics).

64 bits, not butts!

I've been to Denver and chatted to the team about 64 bit, and they are all pretty nonchalant about the process.  Very confident too; as we all know it's been baked into the tools for some time, and the remaining work is getting it into the BSFNs.

Honestly though, how many of your kernels or UBEs need to address more than 2GB of RAM (or 3GB with PAE, blah blah)?  Not many, I hope!  If you do, there might be some other issues that you have to deal with first.

To me it seems pretty simple too: we activate 64-bit tools and then build a full package using 64-bit compile directives.  We then end up with 64-bit pathcode-specific DLLs or .so files and away we go.

The thing is, don't forget that you need to comb through your code to ensure that it is 64-bit ready.  What does this mean?  I again draw an analogy between char and wchar_t - remember the Unicode debacle?  Just think about that once again.  If you use all of the standard JDE mallocs and reallocs, all good; but if you've ventured into the nether-regions of memory management (as I regularly do), then there might be a little more polish you need to provide.

This is a good guide with some great samples of problems and rectifications of problems, quite specifically for JDE:
https://www.oracle.com/webfolder/technetwork/tutorials/jdedwards/White%20Papers/jde64bsfn.pdf

In the simplest form, I'll demonstrate 64 bit vs 32 bit with the following code and the following output.

#include <stdio.h>

int main(void)
{
  int i = 0;
  int *d;

  printf("hello world\n");
  /* an int is 4 bytes under both -m32 and -m64 */
  printf("number %d %zu\n", i, sizeof(i));
  d = &i;
  /* a pointer is 4 bytes under -m32 and 8 bytes under -m64 */
  printf("number %d %zu\n", *d, sizeof(d));
  return 0;
}

giving me the output

[holuser@docker ~]$ cc hello.c -m32 -o hello
[holuser@docker ~]$ ./hello
hello world
number 0 4
number 0 4
[holuser@docker ~]$ cc hello.c -m64 -o hello
[holuser@docker ~]$ ./hello
hello world
number 0 4
number 0 8

Wow - what a difference, hey?  Can't get 32-bit to compile?  Then you are going to need to run this as root:

yum install glibc-devel.i686 libgcc.i686 libstdc++-devel.i686 ncurses-devel.i686 --setopt=protected_multilib=false

The size of the basic pointer is 8 bytes - you can address way more memory.  This is the core of the change to 64 bit and everything flows from the size of the base pointers.

Basically, the addresses are 8 bytes, not 4 - which changes pointer arithmetic and a whole heap of downstream things.  So when doing pointer arithmetic and other cool things, your code is going to be different.
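
To make that concrete, here is a minimal sketch of the classic trap (my own example, not one lifted from the Oracle paper): code that stashes a pointer in an int works by luck on 32-bit and silently truncates the address on 64-bit, so those casts need to move to intptr_t:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
  int x = 42;
  int *p = &x;

  /* BAD on 64-bit: int is 4 bytes, a pointer is 8 - casting a pointer
     to int chops off the top 32 bits of the address. */
  /* int bad = (int)p; */

  /* GOOD: intptr_t is guaranteed wide enough to round-trip a pointer. */
  intptr_t cookie = (intptr_t)p;
  p = (int *)cookie;

  printf("round-tripped value: %d\n", *p);
  return 0;
}

This is exactly the class of thing the jde64bsfn white paper walks you through finding in BSFN code.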

The sales glossy from Oracle is good too; I say get to 64-bit if you can.

1.     Moving to 64-bit enables you to adopt future technology and future-proof your environments. If you do not move to 64-bit, you incur the risk of facing hardware and software obsolescence. The move itself to 64-bit is the cost benefit.
2.     Many vendors of third-party components, such as database drivers and Java, which JD Edwards EnterpriseOne requires, are delivering only 64-bit components. They also have plans in the future to end or only provide limited support of 32-bit components.
3.     It enables JD Edwards to deliver future product innovation and support newer versions of the required technology stack.
4.     There is no impact to your business processes or business data. Transitioning to 64-bit processing is a technical uplift that is managed with the JD Edwards Tools Foundation.

This was stolen directly from https://www.oracle.com/webfolder/technetwork/tutorials/jdedwards/64bit/64_bit_Brief.pdf
  
Okay, so now we know the basics of 64 vs 32 - we need to start coding around it and fixing our code.  You'll know pretty quickly if there are problems; the troubleshooting guide and Google are going to be your friends. 

Note that there are currently 294 ESUs and 2219 objects that are related to BSFN compile and function problems - the reach is far.

These are divided into the following categories:


So there might be quite a bit of impact here.

Multi foundation is painful at the best of times, and this is going to be tough if clients want to do it over a weekend.  I recommend new servers with 64 bit, getting rid of the old ones in one go.  Oracle have done some great work to enable this to be done gradually, but I think you should just bash it into prod on new servers once you have done the correct amount of testing.

This is great too https://docs.oracle.com/cd/E84502_01/learnjde/64bit.html






real-time session information for ALL your JDE users


This post is based upon another YouTube clip, which explains the kind of realtime information that you can extract from JD Edwards using ERP analytics.

Of course, this is just google analytics with some special tuning which is specific to JDE.

This clip shows you how you can see actual activity in JDE, not just what server manager shows - people logged in.  What I find is that the actual load on the system has very little to do with what SM reports.  SM reports artificially high numbers - sessions which have not yet timed out, which can include many hours of inactivity.  What GA (Google Analytics) reports is users who have interacted with the browser in the last 5 minutes.  It also gives you realtime pages per minute and pages per second.  Sometimes I wonder how you can run a site (or at least do load testing) without these metrics.  I often see 120 people in server manager and 35 people online with GA.

Anyway, enjoy the vid – if you have questions, please reach out.


JDE scheduler problems

Who loves seeing logs like this for their scheduler kernel?

108/1168     Tue Dec 11 21:49:02.125002        jdbodbc.C7611
       ODB0000164 - STMT:00 [08S01][10054][2] [Microsoft][SQL Server Native Client 11.0]TCP Provider: An existing connection was forcibly closed by the remote host.
108/1168     Tue Dec 11 21:49:02.125003        jdbodbc.C7611
       ODB0000164 - STMT:01 [08S01][10054][2] [Microsoft][SQL Server Native Client 11.0]Communication link failure

108/1168     Tue Dec 11 21:49:02.125004        JDB_DRVM.C998
       JDB9900401 - Failed to execute db request

108/1168     Tue Dec 11 21:49:02.125005        JTP_CM.C1335
       JDB9900255 - Database connection to F98611 (PJDEENT02 - 920 Server Map) has been lost.

108/1168     Tue Dec 11 21:49:02.125006        JTP_CM.C1295
       JDB9900256 - Database connection to (PJDEENT02 - 920 Server Map) has been re-established.

108/1168     Tue Dec 11 21:49:02.125007        jdbodbc.C2702
       ODB0000020 - DBInitRequest failed - lost database connection.

108/1168     Tue Dec 11 21:49:02.125008        JDB_DRVM.C908
       JDB9900168 - Failed to initialize db request

Who loves spending the morning fixing jobs from the night before and moving batch queues and UBEs until things are back to normal?  No one!

Here is something that may help.  I must admit I've gotta thank an amazing colleague for this - not my SQL, but I do like it.

What you need to do is write a basic shell script (say, one sitting on the enterprise server) that runs this:

select count(*) from SY910.F91300
    where SJSCHJBTYP = '1'
    and SJSCHSTTIME > (select
                      ((extract(day from (current_timestamp-timestamp '1970-01-01 00:00:00 +00:00'))*86400+
                        extract(hour from (current_timestamp-timestamp '1970-01-01 00:00:00 +00:00'))*3600+
                        extract(minute from (current_timestamp-timestamp '1970-01-01 00:00:00 +00:00'))*60+
                        extract(second from (current_timestamp-timestamp '1970-01-01 00:00:00 +00:00')))/60)-60 current_utime_minus_1hour
                        from dual);

If you get 1 (or more), that is good; if you get 0, that is bad and you probably need to recycle your scheduler kernel (that control record should change at least every 15 minutes).

So, if you have a script that runs that, you can tell if the kernel is updating the control record...

Then you can grep through the logs to find the PID of the scheduler kernel and kill it from the OS.  Then I run a little executable that gives the scheduler kernel a kick in the pants (starts a new one) - and BOOM!  You have a resilient JD Edwards scheduler.  Putting it all together looks something like the sketch below.
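
A minimal sketch of the whole watchdog, assuming an Oracle back end with sqlplus available on the enterprise server and the usual jde_<PID>.log naming.  The credentials, log directory, grep string and restart step are all placeholders you'd swap for your own:

#!/bin/sh
# Hypothetical paths and credentials - adjust for your site.
LOGDIR=/u01/jdedwards/e920/log

COUNT=$(sqlplus -s jde/yourpassword@jdeprod <<'EOF'
set heading off feedback off pagesize 0
select count(*) from SY910.F91300
 where SJSCHJBTYP = '1'
   and SJSCHSTTIME > (select
       ((extract(day    from (current_timestamp-timestamp '1970-01-01 00:00:00 +00:00'))*86400+
         extract(hour   from (current_timestamp-timestamp '1970-01-01 00:00:00 +00:00'))*3600+
         extract(minute from (current_timestamp-timestamp '1970-01-01 00:00:00 +00:00'))*60+
         extract(second from (current_timestamp-timestamp '1970-01-01 00:00:00 +00:00')))/60)-60
       from dual);
EOF
)
COUNT=$(echo $COUNT)   # trim the whitespace sqlplus pads the result with

if [ "${COUNT:-0}" -eq 0 ]; then
    # The control record has not moved in the last hour - the kernel is wedged.
    # Find the newest jde_<PID>.log that mentions the scheduler and pull the
    # PID out of the file name (the string to grep for varies by release).
    LOGFILE=$(grep -l -i "scheduler" "$LOGDIR"/jde_*.log | tail -1)
    PID=$(basename "$LOGFILE" .log | sed 's/^jde_//')
    [ -n "$PID" ] && kill -9 "$PID"
    # Site specific: this is where your "kick in the pants" executable goes
    # to start a replacement scheduler kernel.
fi

Cron that every 15 minutes and the morning clean-up largely takes care of itself.  (Note the SQL as given is Oracle syntax; on a SQL Server instance like the one in the logs above, you'd express the epoch arithmetic with DATEDIFF instead.)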



