
fixing fat clients... again

Why am I still doing this?  Fixing fat clients is not a lot of fun.  Here are a couple of tips for connecting to the database using NTS and the correct sqlplus executable, and then for fixing an import that has gone bad.

I feel that I should also point out that I'm using an AWS WorkSpace as my thick client (full disclosure).  There seem to be some fairly large IOPS issues using this config, and perhaps this is why I'm having issues with the JDE thick client installation "out of the box".  But I do persist...

I tried to solve an easy problem today and saw, in an AIS log, evidence of a form that has gone bad…

15 Jul 2019 14:44:14,137[WARN][SMOIR][RUNTIME]CheckBoxEngine.initForDDInfo(): There is no Data Dictionary item associated with this check box. The value may be incorrect | CheckBox ID: 24, Form Name : P55ACCAM_W55ACCAMG Corrective Action :Please associate a Data Dictionary Item with the check box

Okay, fat client, get into designer.

Not as easy as I thought.

Could not log into DV:
Loads of errors about being unable to find spec__blah:

19116/7984 MAIN_THREAD   Mon Jul 15 17:34:41.679000   jdb_ctl.c4208    Starting OneWorld
19116/7984 MAIN_THREAD   Mon Jul 15 17:34:46.515000   jdecsec.c2873    Security Server returned error: eSecInvalidPassword: Invalid Password
19116/7984 MAIN_THREAD   Mon Jul 15 17:34:46.515001   jdecsec.c308     Failed to validate user SMOIR by password
19116/7984 MAIN_THREAD   Mon Jul 15 17:34:46.515002   jdb_ctl.c4865    JDB1100018 - Failed to get past Security check
19116/7984 MAIN_THREAD   Mon Jul 15 17:34:48.263000   msc_signon.cpp184   ValidateUser failed from SignOn
19116/7984 MAIN_THREAD   Mon Jul 15 17:34:51.694000   dbcolind.c141    OCI0000017 - Unable to execute statement for describe - SELECT  *  FROM SPEC_DVB81210F.F98710DVB81210F  WHERE  ( THOBNM = :KEY1 )
19116/7984 MAIN_THREAD   Mon Jul 15 17:34:51.694001   dbcolind.c148    OCI0000018 - Error - ORA-00942: table or view does not exist
19116/7984 MAIN_THREAD   Mon Jul 15 17:34:51.694002   dbinitrq.c1009   OCI0000143 - Failed to determine column order - SELECT  *  FROM SPEC_DVB81210F.F98710DVB81210F  WHERE  ( THOBNM = :KEY1 )
19116/7984 MAIN_THREAD   Mon Jul 15 17:34:51.694003   dbinitrq.c1016   OCI0000144 - Error - ORA-00942: table or view does not exist
19116/7984 MAIN_THREAD   Mon Jul 15 17:34:51.694004   jdb_drvm.c908    JDB9900168 - Failed to initialize db request
19116/7984 MAIN_THREAD   Mon Jul 15 17:34:51.694005   JTP_CM.c1009     JDB9909007 - Unable to obtain driver request handle
19116/7984 MAIN_THREAD   Mon Jul 15 17:34:51.694006   jdb_rst.c1779    JDB9900318 - Failed to find table information in TAM using RDB


This is always a dead give-away:

Error Opening F98MOQUE Table.

Then you get the locked-up menu design.






Okay, I’ll try a new PY package.

I installed a new package and got the same problems.  Hmm, that is annoying.  Take a look in SQL Developer.

Change sqlnet.ora in D:\Oracle12c\E1Local\NETWORK\ADMIN to use NTS authentication, so that a local Windows administrator (a member of the ORA_DBA group) can connect with "/ as sysdba":

# Generated by OEESETUP.EXE
SQLNET.AUTHENTICATION_SERVICES=(NTS)
NAMES.DIRECTORY_PATH=(TNSNAMES)


Run the correct sqlplus; "where sqlplus" shows which executables are on the path:

D:\Oracle12c\E1Local\BIN>where sqlplus
D:\Oracle12c\E1Local\BIN\sqlplus.exe
D:\oracle12c\product\12.2.0\client_1\bin\sqlplus.exe

cd D:\Oracle12c\E1Local\BIN

D:\Oracle12c\E1Local\BIN>.\sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Mon Jul 15 16:56:56 2019

Copyright (c) 1982, 2014, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> create user jdedba identified by jdedba11 ;

User created.

SQL> grant dba to jdedba ;

Grant succeeded.

SQL> grant create session to jdedba ;

Grant succeeded.

SQL> quit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production


SQL Developer showed me that the package owner did not exist…  though my DV package also does not work because there are no tables... wow, this is all over the place.
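
A quick way to see which spec schemas actually exist on the local E1Local database (this is effectively what SQL Developer was showing me); run it as a DBA user:

select username, created
from   dba_users
where  username like 'SPEC%'
order  by created desc;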

Okay – there must have been errors in the install:

C:\jdeinst.log

JD Edwards EnterpriseOne Client Install Log

Look in the log C:\Program Files (x86)\Oracle\Inventory\logs\installActions2019-07-15_04-18-42PM.log for more information.

Congratulations

The above file is the juicy one, but EVERYTHING told me the install was a success…  except something here is fishy…


CMD: sqlplus.exe -S -L
INP: SYSTEM@E1Local
INP: ******
INP: ALTER TABLESPACE SPEC_PYC90501F READ WRITE ;
INP: EXIT
OUT: ALTER TABLESPACE SPEC_PYC90501F READ WRITE
*
ERROR at line 1:
ORA-00959: tablespace 'SPEC_PYC90501F' does not exist

So, find the commands in the install log and run them again in SQL Developer (logged in as the jdedba user you created):

create user SPEC_PYC90501F identified by HELLO;
GRANT CREATE SESSION, ALTER SESSION, CREATE TABLE, CREATE VIEW TO SPEC_PYC90501F ;
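
One more prerequisite for the impdp command below: it references DIRECTORY=PKGDIR, so that directory object needs to exist and be readable.  A minimal sketch, assuming the dump file is sitting in the package spec directory (adjust the path for your package):

-- PKGDIR must point at the directory holding spec_pyc90501f.dmp
CREATE OR REPLACE DIRECTORY PKGDIR AS 'D:\E920\PY920\spec';
GRANT READ, WRITE ON DIRECTORY PKGDIR TO jdedba;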

Then copy the spec files from the spec dir to the data dir?  Don’t ask me.

Then the big one at the command line:

D:\Oracle12c\E1Local\BIN>impdp TRANSPORT_DATAFILES='D:\E920\PY920\spec\spec_pyc90501f.dbf' DIRECTORY=PKGDIR DUMPFILE='spec_pyc90501f.dmp' REMAP_TABLESPACE=SPEC__PYC90501F:SPEC_PYC90501F REMAP_SCHEMA=SPEC__PYC90501F:SPEC_PYC90501F LOGFILE='impspec_pyc90501f.log'

Import: Release 12.1.0.2.0 - Production on Mon Jul 15 17:06:45 2019

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

Username: jdedba
Password:

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Master table "JDEDBA"."SYS_IMPORT_TRANSPORTABLE_01" successfully loaded/unloaded

Starting "JDEDBA"."SYS_IMPORT_TRANSPORTABLE_01":  "jdedba/********" TRANSPORT_DA
TAFILES='D:\E920\PY920\spec\spec_pyc90501f.dbf' DIRECTORY=PKGDIR DUMPFILE='spec_
pyc90501f.dmp' REMAP_TABLESPACE=SPEC__PYC90501F:SPEC_PYC90501F REMAP_SCHEMA=SPEC
__PYC90501F:SPEC_PYC90501F LOGFILE='impspec_pyc90501f.log'
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type TRANSPORTABLE_EXPORT/INDEX/INDEX
Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/CONSTRAINT
Processing object type TRANSPORTABLE_EXPORT/INDEX_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/STATISTICS/MARKER
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Job "JDEDBA"."SYS_IMPORT_TRANSPORTABLE_01" successfully completed at Mon Jul 15
17:07:42 2019 elapsed 0 00:00:49


Done!

sqlplus one more time:
ALTER TABLESPACE SPEC_PYC90501F READ WRITE;
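
To confirm everything landed, a couple of sanity checks against the standard dictionary views:

-- The tablespace should now show ONLINE (not READ ONLY)
select tablespace_name, status
from   dba_tablespaces
where  tablespace_name = 'SPEC_PYC90501F';

-- And the spec schema should now own tables
select count(*)
from   dba_tables
where  owner = 'SPEC_PYC90501F';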


Now I can log into JDE – painful, though.





sqlplus commitment issues

What happens when you exit sqlplus without issuing a commit?  It does a rollback, right?  Wrong!

We’ve been doing some scripting in SQL*Plus and I was asked this question.  I gave a confident reply – of course it rolls back.  And then I decided to test it.

My assumption was that if you did not commit, your transactions would ROLLBACK.  It seems I was totally wrong!!

$ sqlplus jde@jdeprod

SQL*Plus: Release 12.2.0.1.0 Production on Wed Jul 24 15:21:50 2019
Copyright (c) 1982, 2016, Oracle.  All rights reserved.
Enter password:
Last Successful login time: Wed Jul 24 2019 15:21:43 +10:00
Connected to:
Oracle Database 12c Standard Edition Release 12.2.0.1.0 - 64bit Production
SQL> create table shae (username varchar(20)) ;
Table created.
SQL> insert into shae values ('nigel') ;
1 row created.
SQL> quit
Disconnected from Oracle Database 12c Standard Edition Release 12.2.0.1.0 - 64bit Production

You would think that this record would not exist now…  but as you can see, after a standard quit (or exit), a commit is issued!

$ sqlplus jde@jdeprod
SQL*Plus: Release 12.2.0.1.0 Production on Wed Jul 24 15:24:43 2019
Copyright (c) 1982, 2016, Oracle.  All rights reserved.
Enter password:
Last Successful login time: Wed Jul 24 2019 15:24:38 +10:00
Connected to:
Oracle Database 12c Standard Edition Release 12.2.0.1.0 - 64bit Production
SQL> select count(1) from shae ;
  COUNT(1)
----------
         1
SQL> select * from shae ;
USERNAME
--------------------
nigel
SQL> update shae set username = 'ralph' ;
1 row updated.
SQL> quit

Holy moly!
If you want a rollback, you need to specify it in the exit command:

SQL> update shae set username = 'testing';
1 row updated.
SQL> exit rollback ;
Disconnected from Oracle Database 12c Standard Edition Release 12.2.0.1.0 - 64bit Production
$ sqlplus jde@jdeprod
SQL*Plus: Release 12.2.0.1.0 Production on Wed Jul 24 15:59:10 2019
Copyright (c) 1982, 2016, Oracle.  All rights reserved.
Enter password:
Last Successful login time: Wed Jul 24 2019 15:59:05 +10:00
Connected to:
Oracle Database 12c Standard Edition Release 12.2.0.1.0 - 64bit Production
SQL> select * from shae
  2  ;
USERNAME
--------------------
shannon

And the supporting documentation:


EXIT
Syntax
{EXIT | QUIT} [SUCCESS | FAILURE | WARNING | n | variable | :BindVariable] [COMMIT | ROLLBACK]
Commits or rolls back all pending changes, logs out of Oracle Database, terminates SQL*Plus and returns control to the operating system.
In iSQL*Plus, commits or rolls back all pending changes, stops processing the current iSQL*Plus script and returns focus to the Input area. There is no way to access the return code in iSQL*Plus. In iSQL*Plus click the Logout button to exit the Oracle Database.
Commit on exit, or commit on termination of processing in iSQL*Plus, is performed regardless of the status of SET AUTOCOMMIT.
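
If you are scripting, you can make the behaviour explicit instead of relying on the default.  A minimal sketch using standard SQL*Plus directives (SET EXITCOMMIT is available in recent SQL*Plus releases):

-- Fail safe: roll back on errors, and on a plain exit/quit
WHENEVER SQLERROR EXIT FAILURE ROLLBACK
WHENEVER OSERROR EXIT FAILURE ROLLBACK
SET EXITCOMMIT OFF

update shae set username = 'testing';
-- With EXITCOMMIT OFF, this plain quit now rolls back instead of committing
quit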

find and kill... that's harsh. find problematic IO intensive oracle operations and prevent them causing too much carnage...

This is a continuation of my IOPS challenges.

This is a non-DBA's cheat sheet for finding IO in a standard Oracle database.

It's good to find out what queries are smashing the disk:

select
   p.spid,
   s.sid,
   s.serial#,
   s.process cli_process,
   s.status,t.disk_reads,
   s.last_call_et/3600 last_call_et_Hrs,
   s.action,
   s.program,
   t.sql_fulltext
from
   v$session s,
   v$sqlarea t,
   v$process p
where
   s.sql_address = t.address
and
   s.sql_hash_value = t.hash_value
and
   p.addr = s.paddr
-- and
--t.disk_reads > 10
order by
   t.disk_reads desc;

PID                            SID    SERIAL# CLI_PROCESS              STATUS   DISK_READS LAST_CALL_ET_HRS ACTION                                                           PROGRAM                                          SQL_FULLTEXT                                                                   
------------------------ ---------- ---------- ------------------------ -------- ---------- ---------------- ---------------------------------------------------------------- ------------------------------------------------ --------------------------------------------------------------------------------
2924                           3880      15998 1234                     INACTIVE   48832423           4.4775                                                                  JDBC Thin Client                                 SELECT T1.GMANS,T0.GLODCT,T2.MCADDS,T2.MCRP20,T0.GLALT4,T2.MCCLNU,T0.GLPYID,T0.G
4939                           7678       8934 1234                     INACTIVE   48832423       4.57472222                                                                  JDBC Thin Client                                 SELECT T1.GMANS,T0.GLODCT,T2.MCADDS,T2.MCRP20,T0.GLALT4,T2.MCCLNU,T0.GLPYID,T0.G
4935                          10191      19604 1234                     INACTIVE   48832423       4.51472222                                                                  JDBC Thin Client                                 SELECT T1.GMANS,T0.GLODCT,T2.MCADDS,T2.MCRP20,T0.GLALT4,T2.MCCLNU,T0.GLPYID,T0.G
3679                          10175      40187 1234                     INACTIVE   20300027            10.71                                                                  JDBC Thin Client                                 SELECT SDAN8,SDQTYT,SDPPDJ,SDUORG,SDDCT,SDFRGD,SDDELN,SDPA8,SDADTM,SDTHRP,SDSRP2
4931                          19066        290 1234                     INACTIVE   16277598       6.58472222                                                                  JDBC Thin Client                                 SELECT T1.GMANS,T0.GLODCT,T2.MCADDS,T2.MCRP20,T0.GLALT4,T2.MCCLNU,T0.GLPYID,T0.G
2181                           1311       2983 1234                     INACTIVE    7032445       41.3938889                                                                  JDBC Thin Client                                 SELECT  DISTINCT GLDOC,GLPOST,GLLT,GLDGJ,GLKCO,GLDCT,GLEXA,GLR1,GLRE,GLPN,GLICU,
9811                          15283      13699 1234                     INACTIVE    7032445       41.3938889                                                                  JDBC Thin Client                                 SELECT  DISTINCT GLDOC,GLPOST,GLLT,GLDGJ,GLKCO,GLDCT,GLEXA,GLR1,GLRE,GLPN,GLICU,
7281                          15258       2380 1234                     INACTIVE    3379248       37.2380556                                                                  JDBC Thin Client                                 SELECT SDAN8,SDQTYT,SDPPDJ,SDUORG,SDDCT,SDFRGD,SDDELN,SDPA8,SDADTM,SDTHRP,SDSRP2
27197                            41       1604 1234                     INACTIVE    2911686       27.3166667                                                                  JDBC Thin Client                                 SELECT T1.GMANS,T0.GLODCT,T2.MCADDS,T2.MCRP20,T0.GLALT4,T2.MCCLNU,T0.GLPYID,T0.G
13675                         16529        830 1234                     INACTIVE    1207700       48.6297222                                                                  JDBC Thin Client                                 SELECT T1.GMANS,T0.GLODCT,T2.MCADDS,T2.MCRP20,T0.GLALT4,T2.MCCLNU,T0.GLPYID,T0.G
29908                          2602       9964 1234                     INACTIVE    1207700       48.6297222                                                                  JDBC Thin Client                                 SELECT T1.GMANS,T0.GLODCT,T2.MCADDS,T2.MCRP20,T0.GLALT4,T2.MCCLNU,T0.GLPYID,T0.G

Remember that LAST_CALL_ET works like this:
If the session STATUS is currently ACTIVE, the value represents the elapsed time in seconds since the session became active.
If the session STATUS is currently INACTIVE, the value represents the elapsed time in seconds since the session became inactive.

It is also handy to look at longops to know how long operations might take to complete, if they are listed there.
 
   select
   l.sid,
   l.sofar,
   l.totalwork,
   l.start_time,
   l.last_update_time,
   s.sql_text
from
   v$session_longops l      
left outer join
    v$sql s
on
   s.hash_value = l.sql_hash_value
and
   s.address = l.sql_address
and
   s.child_number = 0
   order by TOTALWORK desc;

The above is cool for seeing long operations; if you want to see only the active longops, add this where clause:
 
   select
   l.sid,
   l.sofar,
   l.totalwork,
   l.start_time,
   l.last_update_time,
   s.sql_text
from
   v$session_longops l      
left outer join
    v$sql s
on
   s.hash_value = l.sql_hash_value
and
   s.address = l.sql_address
and
   s.child_number = 0
where sofar < totalwork
   order by TOTALWORK desc;
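
v$session_longops also exposes ELAPSED_SECONDS and TIME_REMAINING columns, so for active operations you can get Oracle's own completion estimate directly:

select
   sid,
   opname,
   target,
   sofar,
   totalwork,
   elapsed_seconds,
   time_remaining
from
   v$session_longops
where sofar < totalwork
   order by time_remaining desc;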

Full text from longops – so you can do query plans:

select
   l.sid, 
   l.sofar, 
   l.totalwork, 
   l.start_time, 
   l.last_update_time, 
   t.sql_fulltext
from
   v$session_longops l       
left outer join
    v$sql s 
on 
   s.hash_value = l.sql_hash_value
and
   s.address = l.sql_address
and
   s.child_number = 0
left outer join  v$sqlarea t
on
   l.sql_hash_value = t.hash_value 
   order by TOTALWORK desc;
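
Before killing anything you need the sid and serial# pair; a quick lookup for a sid you have spotted above, joining to v$process for the OS process id as well:

select
   s.sid,
   s.serial#,
   p.spid,
   s.username,
   s.program
from
   v$session s,
   v$process p
where
   p.addr = s.paddr
and
   s.sid = 13993;   -- substitute the sid you are chasing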


Then kill it.  Remember that RDS on AWS does not let you run this – no matter who you are connected as.  Remember too that if there are java processes in longops, you can basically kill them.  I would not touch runbatch longops – those are legitimate.  Quite often jdenet_k processes cannot really run long processes, so you need to be a little bit careful here.

   alter system kill session '13993,34274'
Error report -
ORA-01031: insufficient privileges
01031. 00000 -  "insufficient privileges"
*Cause:    An attempt was made to perform a database operation without
           the necessary privileges.
*Action:   Ask your database administrator or designated security
           administrator to grant you the necessary privileges

You need to run their procedure instead:

begin
    rdsadmin.rdsadmin_util.kill(
        sid    => 13993,
        serial => 34274);
end;
/

Batch analytics - Tracking UBE performance in JD Edwards

Tracking and comparing batch performance.

This is an update on my previous post https://shannonscncjdeblog.blogspot.com/2019/07/ube-performance-suite-with-dash-of.html on historical batch performance analysis.

The batch window can sometimes be an enigma wrapped in a riddle.  It takes mapping the F986110 to the F986114 and lots of complex queries (like in my blog!!) to work out what is going on.

It’s hard when you make changes to a certain UBE or to some central technology (batch server, database server) and want to justify your existence.  You want to tell your managers how good you are and how many CPU cycles you’ve saved by implementing said changes.  Yeah, we all want that.

There are a couple of challenges with this approach: firstly, you delete your WSJ history and therefore probably do not see the full picture.  Perhaps a user even deletes their own history (I heard that this occurred once, a few years ago.  That user is now a consultant!)  It is also really hard to summarise all of the jobs, with rows processed etc., and get some real results.

Then my team came up with ERP Analytics for batch, where you can see all of this information – no matter what you are doing with the history.

You can see that I’m using a fictional JDE company, but not a fictional company.  I try to keep my blog posts relevant on a number of levels.  www.watsacowie.com.au – get there!

Back to my technical blog.

This is a fairly boring report.  You can see all of the UBEs ever run, for all servers and all environments.  You can refine this with easy-to-use drop-downs that allow you to select a certain server or date range.


Pretty cool, yeah, but look what happens when you put in a date range:


Then the magic happens


You can see that the report has automatically calculated the previous period dates (in this case another week further back) and has run a side by side compare for all of the UBE's.  This has compared about 180,000 records in seconds and gives me a point in time compare of this week over last week.

We can now see if any reports were processing WAY more data, taking way longer, or running more times than the previous week.  This is good information for understanding a change in performance and then using some of our other reports to work out where that performance problem is.

All these reports can be simply hosted on an E1 page in JD Edwards, so that you can see your batch history all of the time and understand why things are performing the way they are.

If you are interested in having this configured for your JD Edwards, then please reach out on LinkedIn or email me (you can work out my address: my first name "." my surname at fusion5.com.au – I'm trying to deter the bots).  We have a number of clients running with millions of historical batch records.  We automate the delivery of historical period comparisons so that they know of any issues and can act upon them.

Of course we are looking to augment this offering with some significant additions:

  1. AI over the top of job status, rows processed and execution time.  This is going to perform anomaly detection over those dimensions and email you when something has gone wrong.  This has a far deeper understanding of the success or failure of the job, as it’s not just D or E.
  2. Storing the actual PDF files also.  If you are ever worried about purging your WSJ – don’t.  We can carve off the jobs and the data and have this available to you in the EXACT same format as you see above.  A simple click of a link will allow you (if you have permission) to download the PDF.  This will keep your WSJ and PrintQueue directories (PDF database files) in check much better.  You also don’t need to worry about the auditors asking for reports.
  3. Store the logs and CSVs
  4. Make a portal for your customers to get their files!  


10 Tips for a JD Edwards upgrade


I’ve been involved with more than 100 JD Edwards upgrades, so this experience gives me the opportunity to highlight some lessons that I’ve learned along the way.  Some of these are super simple and some involve a little more planning and execution (and sometimes some external help).

This is going to be a multiple blog post article on what I see as 10 things that you can get right to help make your next JD Edwards project a success.

Tip 1: Establish a project scope and stick to it.

There are generally a lot of choices for an upgrade, and they are only getting tougher.  Do I change database?  Do I change platforms?  Should I move to the cloud?  Do I need a managed service?  Yes – lots of questions, but they really should not get in the way of the mechanics of what is being done.  You’ll still need to do the same amount of testing, and let’s be honest, this is generally the critical path on any upgrade project.

Form a clear idea of what you want to achieve for the project.  Scope the technical decisions early.  Don’t be afraid of a couple of architectural changes in addition to the upgrade itself - refer back to my observation on testing [critical path].
Figure 1: There are so many architectural options when considering an upgrade
Use the testing and the project to achieve some strategic goals.  Nearly every upgrade that I’m involved in has more than a single dimension.  More often than not, we are performing upgrades at the same time as cloud migrations.  As I alluded to, this is a good combination and a tight scope.  If I were to take on an upgrade and cloud migration, I’d try not to change much else in that step.

Changing too much can introduce too much risk. For instance, changing your output management software [createform] can be very onerous and take a lot of time.  If you think of all the time that you have spent on making your invoice print perfectly, every time…  Any change in this area is going to take a lot of time!  You can do that iteratively when you get an opportunity.  Output management solutions can be run in parallel and give you a perfect fallback position.

Be tight on scope and agree early – what is in and what is out.


Tip 2: Understand and continuously measure your testing

It’s critical to understand the scope of testing.  Whether you are doing test automation, test outsourcing or manual testing, you need to know the programs and versions that your users are currently using.  This is going to allow for accurate test scenarios.  Knowing your programs (both interactive and batch) will allow you to also choose candidates for automation (if that is what you are going to do).

A high level idea of your processes (whether documented or reverse engineered by clever e1 pages) will enable you to group your test cases and test resources.  Quite often a logical grouping of your testing can ensure that end to end processes are tested and that the test data is going through a full cycle too.

Get a good list of programs and how often they are used.  Get a good list of users from production and what programs they are running.  Use these two project assets to then choose a test team and also ensure test case saturation in your test environments.
Figure 2: Using the Fusion5 ERP Analytics suite, you can quickly see programs and how much they are used, filtered by date range and environment.

At Fusion5 we advocate the use of ERP Analytics to give you “easy to use” reporting on all user activity in JD Edwards.  This software subscription can plug into your current production environment and your upgraded environment and provide constant feedback about test volumes and test case saturation – for both batch and interactive.

Tip 3: Understand the total impacts and costs of change

Going from SQL Server to Oracle means that you will have case-sensitive queries, where users previously did not need to worry about cAsE.  This is unavoidable and you’ll need to educate the user community on it, so follow your change management processes and include lots of comms.

Going from Windows to Linux might mean changes to processing options for the location of files or interoperability.  There are options for bulk Processing Option (PO) changes using the database.  This is very important for finding a “\” for example and turning that into a “/” or vice-versa.  This simple trick of interrogating the BLOB in the F983051 can change a very manual and error prone process to an exact science.
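
As a sketch of the idea (the column names here are from memory, so treat them as assumptions and verify against your F983051 layout before touching anything): DBMS_LOB can find the candidate records, and you take it from there.

-- Find versions whose processing option BLOB contains a backslash.
-- VRPID/VRVERS/VRVRDATA are assumed names - check your Central Objects schema.
select vrpid, vrvers
from   f983051
where  dbms_lob.instr(vrvrdata, utl_raw.cast_to_raw('\'), 1, 1) > 0;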

Media Object changes are critical and need to be understood as you upgrade JD Edwards.  There are more storage options that you need to be aware of, some for good and some for your own peril.  I say peril: the storage costs alone are similar (see below), but database IOPS are something that you need to focus on VERY carefully for a cloud implementation – as this is what generally governs your service limits.

For example, let’s look at this from a pure cloud cost perspective.  In AWS, 100GB of S3 is going to cost you $0.023 per GB ($2.30 a month).  EFS (Elastic File System), a highly available storage format perfect for media objects, is going to cost you about $0.36 per GB per month ($36 a month).  Put that into your highly available database instance (multiple availability zones) and it is going to be $27.60 per month.  Remember that you don’t really back up or restore the EFS, so you are only paying for a single copy (you might snap it to S3).



Look at your end game infrastructure and make sure that you understand both the change management impacts and the cost impacts of the new architecture.




Tip 4: Get your browser ready

Your browser is basically equivalent to your operating system in terms of compliance and importance to JD Edwards.  You need to get it right.  There are a number of important compatibility settings, security settings, proxy exceptions (and more) that you need to ensure are pushed out to the business as part of an implementation. 

In general, URLs no longer change with an upgrade or migration of JD Edwards.  A production URL (jde.fusion5.com.au, for example) is probably well known and has all sorts of favourites saved on all sorts of machines.  Don’t change it – it’s painful.  You’ll have more calls into your helpdesk saying “JDE is broken” than you can poke a stick at.  Please keep the URL the same and you’ll have a better chance of everyone being able to log in.

Ensure that you push out cache refreshes for any tools release change, or any JDE change for that matter.  It’s critical to manage all of your browsers too, not just the ones that you think are being used.  How do you know what browsers you need to cater for?  Use ERP Analytics, of course.  This gives you detailed mapping of users and programs to browsers.  It’ll also allow you to see whether you are still using ActiveX (please don’t keep relying on this) and what settings your CNC team need to put into the JAS.INI to ensure that all browsers are treated equally (well, as equally as possible).  Supporting a broad base of browsers and technologies is always going to be best.

Browser performance constantly surprises me, as the two screen grabs below attest.  We have two different clients that use JDE heavily, and you’ll see opposite results in terms of performance.  It really does make a HUGE difference though – an 80% difference at site 1 and a 30% difference at site 2 – purely based on browser choice.

Figure 4: Internet Explorer is significantly faster than Chrome, and it is clearly the browser of choice.

Figure 5: A different client sees Internet Explorer as the most popular, but 28% slower than Chrome


dbms_output and fflush

This is purely a reminder for me for next time: getting the output from a begin block, vs. not.

It seems that a begin block runs on the server without coming back to the client.  This is nice for speed, but crappy for feedback.  You can understand why, too: if the server is busy running all of your statements, then you are not going to get the output back.

And guess what, heaps of the cool statements can only be run in server code, which makes total sense too.  Oh, and also there is no fflush with dbms_output – so my blog title is a little misleading.

It's an important fact to remember: all your directives, like setting echo on, are for the client.  So spool is a sqlplus command, not a PL/SQL command – BOOM!  Mind blown!
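
If you do want live progress from inside a running block, the closest thing to an fflush is DBMS_APPLICATION_INFO – it writes to v$session immediately, so another session can watch the loop progress while the block runs.  A minimal sketch:

begin
   for a in 119200..119228 loop
      -- visible straight away from another session:
      --   select client_info from v$session where client_info is not null;
      dbms_application_info.set_client_info('processing date ' || to_char(a));
      null;  -- the real DELETEs would go here
   end loop;
end;
/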

This post might also help you purge your work centre data, as quite often [for some more than others] you might get a few too many records in this area.


set echo on
set feedback on
set timing on
SET SERVEROUTPUT ON FORMAT WORD_WRAPPED
spool truncateUselessWorkCentre.log
DELETE from TWEDTA.F00166 where GTOBNM='GT01131'
AND GTTXKY IN (SELECT ZZSERK from TWEDTA.F01131 where ZZAN8 in (99000006, 99000007, 99000013));
commit;
DELETE from TWEDTA.F00165 where GDOBNM='GT01131'
AND GDTXKY IN (SELECT ZZSERK from TWEDTA.F01131 where ZZAN8 in (99000006, 99000007, 99000013));
commit;
DELETE from  TWEDTA.F01131T where
ZCSERK in (SELECT ZZSERK from TWEDTA.F01131 where ZZAN8 in (99000006, 99000007, 99000013));
commit;
begin
   for a in 119200..119228 loop
        dbms_output.put_line('06 about to process date' || to_char(a));
        DELETE from  TWEDTA.F01131M where ZMAN8 = 99000006 and zmdti = a;
        commit;
     dbms_output.put_line('07 about to process date' || to_char(a));
        DELETE from  TWEDTA.F01131M where ZMAN8 = 99000007 and zmdti = a;
        commit;
     dbms_output.put_line('13 about to process date' || to_char(a));
        DELETE from  TWEDTA.F01131M where ZMAN8 = 99000013 and zmdti = a;
        commit;
   end loop;
end;
/
DELETE from  TWEDTA.F01133 where
ZTSERK in (SELECT ZZSERK from TWEDTA.F01131 where ZZAN8 in (99000006, 99000007, 99000013));
commit;
DELETE from  TWEDTA.F01131 where ZZAN8 in (99000006, 99000007, 99000013);
commit;
spool off;
quit;


gives the output:
SQL> DELETE from TWEDTA.F00166 where GTOBNM='GT01131'
  2  AND GTTXKY IN (SELECT ZZSERK from TWEDTA.F01131 where ZZAN8 in (99000006, 99000007, 99000013));

8155 rows deleted.

Elapsed: 00:00:02.75
SQL> commit;

Commit complete.

Elapsed: 00:00:00.01
SQL> DELETE from TWEDTA.F00165 where GDOBNM='GT01131'
  2  AND GDTXKY IN (SELECT ZZSERK from TWEDTA.F01131 where ZZAN8 in (99000006, 99000007, 99000013));

8155 rows deleted.

Elapsed: 00:00:04.48
SQL> commit;

Commit complete.

Elapsed: 00:00:00.01
SQL> DELETE from  TWEDTA.F01131T where
  2  ZCSERK in (SELECT ZZSERK from TWEDTA.F01131 where ZZAN8 in (99000006, 99000007, 99000013));

2330 rows deleted.

Elapsed: 00:00:00.36
SQL> commit;

Commit complete.

Elapsed: 00:00:00.00
SQL> begin
  2  for a in 119200..119228 loop
  3       dbms_output.put_line('06 about to process date' || to_char(a));
  4       DELETE from  TWEDTA.F01131M where ZMAN8 = 99000006 and zmdti = a;
  5       commit;
  6       dbms_output.put_line('07 about to process date' || to_char(a));
  7       DELETE from  TWEDTA.F01131M where ZMAN8 = 99000007 and zmdti = a;
  8       commit;
  9       dbms_output.put_line('13 about to process date' || to_char(a));
10       DELETE from  TWEDTA.F01131M where ZMAN8 = 99000013 and zmdti = a;
11       commit;
12  end loop;
13  end;
14  /

-- but this all comes at once; there is no concept of fflush on the server, so you need to wait for the block to return

06 about to process date119200                                                 
07 about to process date119200                                                 
13 about to process date119200                                                 
06 about to process date119201                                                 
07 about to process date119201                                                 
13 about to process date119201                                                 
06 about to process date119202                                                 
07 about to process date119202                                                 
13 about to process date119202                                                 
06 about to process date119203                                                 
07 about to process date119203                                                 
13 about to process date119203                                                  
06 about to process date119204                                                 
07 about to process date119204                                                 
13 about to process date119204                                                  
06 about to process date119205                                                 
07 about to process date119205                                                 
13 about to process date119205                                                  
06 about to process date119206                                                 
07 about to process date119206                                                 
13 about to process date119206                                                  
06 about to process date119207                                                 
07 about to process date119207                                                 
13 about to process date119207                                                 
06 about to process date119208                                                 
07 about to process date119208                                                 
13 about to process date119208                                                 
06 about to process date119209                                                 
07 about to process date119209                                                 
13 about to process date119209                                                 
06 about to process date119210                                                  
07 about to process date119210                                                 
13 about to process date119210                                                 
06 about to process date119211                                                 
07 about to process date119211                                                 
13 about to process date119211                                                 
06 about to process date119212                                                 
07 about to process date119212                                                 
13 about to process date119212                                                 
06 about to process date119213                                                 
07 about to process date119213                                                 
13 about to process date119213                                                  
06 about to process date119214                                                 
07 about to process date119214                                                 
13 about to process date119214                                                  
06 about to process date119215                                                 
07 about to process date119215                                                 
13 about to process date119215                                                  
06 about to process date119216                                                 
07 about to process date119216                                                 
13 about to process date119216                                                  
06 about to process date119217                                                 
07 about to process date119217                                                 
13 about to process date119217                                                 
06 about to process date119218                                                 
07 about to process date119218                                                 
13 about to process date119218                                                 
06 about to process date119219                                                 
07 about to process date119219                                                 
13 about to process date119219                                                 
06 about to process date119220                                                 
07 about to process date119220                                                 
13 about to process date119220                                                 
06 about to process date119221                                                 
07 about to process date119221                                                 
13 about to process date119221                                                 
06 about to process date119222                                                 
07 about to process date119222                                                 
13 about to process date119222                                                 
06 about to process date119223                                                  
07 about to process date119223                                                 
13 about to process date119223                                                 
06 about to process date119224                                                  
07 about to process date119224                                                 
13 about to process date119224                                                 
06 about to process date119225                                                  
07 about to process date119225                                                 
13 about to process date119225                                                 
06 about to process date119226                                                  
07 about to process date119226                                                 
13 about to process date119226                                                 
06 about to process date119227                                                 
07 about to process date119227                                                 
13 about to process date119227                                                 
06 about to process date119228                                                 
07 about to process date119228                                                 
13 about to process date119228                                                 

PL/SQL procedure successfully completed.

Elapsed: 00:08:47.17
SQL> DELETE from  TWEDTA.F01133 where
  2  ZTSERK in (SELECT ZZSERK from TWEDTA.F01131 where ZZAN8 in (99000006, 99000007, 99000013));

10484 rows deleted.

Elapsed: 00:00:00.91
SQL> commit;

Commit complete.

Elapsed: 00:00:00.00
SQL> DELETE from  TWEDTA.F01131 where ZZAN8 in (99000006, 99000007, 99000013);

10484 rows deleted.

Elapsed: 00:00:02.07
SQL> commit;

Commit complete.

Elapsed: 00:00:00.01
SQL> spool off;

Finally, the script is improved to use a dynamic spool file name and relative dates.  Note also that, now I know what is server code and what is client code, I do some auditing at the top and bottom of the script to improve the results.

set echo on
set feedback on
set timing on
SET SERVEROUTPUT ON FORMAT WORD_WRAPPED

col dt new_value dt
select to_char(sysdate,'YYYYMMDDHH24MISS') dt from dual;

spool truncateUselessWorkCentre_daily_&dt.log

select count(1), 'F00166' from twedta.f00166;
select count(1), 'F00165' from twedta.f00165;
select count(1), 'F01131T' from twedta.f01131T;
select count(1), 'F01131M' from twedta.f01131M;
select count(1), 'F01133' from twedta.f01133;
select count(1), 'F01131' from twedta.f01131;

DECLARE
fromdate number;
todate number;
begin
   select (to_char(sysdate-6, 'YYYYDDD')-1900000) into fromdate from dual;
   select (to_char(sysdate-3, 'YYYYDDD')-1900000) into todate from dual;
   for a in fromdate..todate loop
      DELETE from TWEDTA.F00166 where GTOBNM='GT01131'
      AND GTTXKY IN (SELECT ZZSERK from TWEDTA.F01131 where ZZAN8 in (99000006, 99000007, 99000013) and ZZDTI = a);
      commit;
      DELETE from TWEDTA.F00165 where GDOBNM='GT01131'
      AND GDTXKY IN (SELECT ZZSERK from TWEDTA.F01131 where ZZAN8 in (99000006, 99000007, 99000013) and ZZDTI = a);
      commit;
      DELETE from  TWEDTA.F01131T where
      ZCSERK in (SELECT ZZSERK from TWEDTA.F01131 where ZZAN8 in (99000006, 99000007, 99000013) and ZZDTI = a);
      commit;
        dbms_output.put_line('06 about to process date' || to_char(a));
        DELETE from  TWEDTA.F01131M where ZMAN8 = 99000006 and zmdti = a;
        commit;
      dbms_output.put_line('07 about to process date' || to_char(a));
        DELETE from  TWEDTA.F01131M where ZMAN8 = 99000007 and zmdti = a;
        commit;
      dbms_output.put_line('13 about to process date' || to_char(a));
        DELETE from  TWEDTA.F01131M where ZMAN8 = 99000013 and zmdti = a;
        commit;
      DELETE from  TWEDTA.F01133 where
      ZTSERK in (SELECT ZZSERK from TWEDTA.F01131 where ZZAN8 in (99000006, 99000007, 99000013) and ZZDTI = a);
      commit;
      DELETE from  TWEDTA.F01131 where ZZAN8 in (99000006, 99000007, 99000013) and ZZDTI = a;
      commit;
   end loop;
end;
/

select count(1), 'F00166' from twedta.f00166;
select count(1), 'F00165' from twedta.f00165;
select count(1), 'F01131T' from twedta.f01131T;
select count(1), 'F01131M' from twedta.f01131M;
select count(1), 'F01133' from twedta.f01133;
select count(1), 'F01131' from twedta.f01131;
spool off;
quit;
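
For reference, the date arithmetic in that block converts to JDE's Julian CYYDDD format: to_char(sysdate, 'YYYYDDD') gives e.g. 2019205, and subtracting 1900000 yields 119205 (century/year 119, day 205 of the year), which matches the values the F01131M holds in its date column.  A quick sanity check:

-- today's date in JDE Julian (CYYDDD) format
select to_char(sysdate, 'YYYYDDD') - 1900000 as jde_julian from dual;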

Tip 5: Load test your batch activity


Batch load testing is really a pure performance comparison which takes away a potential tier in your traditional 3-tier JD Edwards architecture (Web, App & DB).  The nice thing about this is that you are really only testing your batch server and your database server.

JD Edwards (in my mind) submits two types of jobs:
  1. Jobs that run a series of large SQL statements.  These are generally not complex, as the batch engine’s capacity to run complex statements (even simple aggregates) is not good.  Therefore you are going to get large open selects, which then perform subsequent actions based upon each row returned in the main loop (e.g. R09705 - Compare Account Balances to Transactions).
  2. Punchy UBEs that get in with some tight data selection, generally run a pile of BSFNs and then jump out again (e.g. R42565 – Invoice Print).
It’s easy to categorise these jobs because of the amazing job Oracle did with “Execution Detail”, specifically rows processed.
Figure 6: View taken from "Execution Detail" row exit from Work with Submitted Jobs (WSJ)

You can actually databrowse this (V986114A) and see column alias PWPRCD, defined as “The number of rows processed by main driver section of the batch job”.  I use this in a lot of SQL around performance, as I can get rows per second for my UBEs – which is a great comparison device.  If you see consistently low numbers here, it is probably a punchy UBE (category 2); lots of rows, probably category 1.
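
As a sketch of that rows-per-second idea (PWPRCD is documented above; every other column name here is an assumption, so check your V986114A layout before using it):

-- rows per second per job - PWPID/PWVERS/PWEXETIM are assumed names
select pwpid, pwvers, pwprcd,
       round(pwprcd / nullif(pwexetim, 0), 1) as rows_per_sec
from   v986114a
where  pwprcd > 0
order  by rows_per_sec;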

Make sure that you test UBEs in both of the categories that I have listed above.  Some are going to test the database more, some are going to test the CPU on the batch server and some are going to test network I/O.  Make sure that you “tweak” your TCP/IP too, as I have seen this make some impressive differences in batch performance. (search Doc ID 1633930.1 and tweak).

The Fusion5 UBE Analytics suite allows you to do this comparison immediately and gives you some impressive power to compare periods, servers and more.
Figure 7: UBE Analytics summary screen - week on week performance comparison
We can choose a date range to compare and let the system do the rest.

You can see, for each UBE and version combination that has been run for this fictional client in the date range specified, a comparison with the previous period – performance has slowed down in the top 12 rows.  I’d be looking at what has changed!
The UBE Analytics data is not stored in JD Edwards, so you never lose your history.

Tip 6: Load test your interactive activity

You’ve got my full attention in this section; I really enjoy load testing.  Whether it is using OATS or other software, it’s a bit of a passion of mine.

Here is my recipe for success for interactive load testing.  You have your batch results above, so you are pretty confident with database sizing and hopefully the application (logic) layer.  We are now going to test the interactive performance of JD Edwards and how the user is going to experience things.

The first thing you need to be honest about is the peak number of users that you are going to test.  If Server Manager tells you 150 users are on, how many people would you load test with?  I can tell you – A LOT LESS!  I would test 40 in that scenario, with a wait time of 5 – 8 seconds.  Let me show you why:

Figure 8: Standard ERP Analytics screen showing current activity, both throughput and location


My interactive report says there are 56 users logged into JDE and active in the last 5 minutes.  This is an interactive dashboard that Fusion5 ERP Analytics customers have access to.  You can also see the pages per minute and pages per second.  We are peaking at about 150 a minute in that snapshot, but I can find the peaks over the last 2 months if needed.
Figure 9: Server Manager’s view of the connected world is generally artificially high
Yet Server Manager is trying to tell me that I have 288 users.
Even after my classic halving to allow for the AIS double-up, we have 144 logged in, but only 58 active in the last minute.

What I’m trying to say here is don’t stress your system with too many users. Tuning for 3 x the worst scenario possible is actually going to slow you down.
Figure 10: ERP Analytics screen showing time of day, page views and performance. Reiterating the fact that a busy server is a fast server!

The graph above is unequivocal in showing that performance is better when pages are busy.  Do not have too many idle web servers because you have catered for 3x the users – your users are actually going to experience worse performance.  This is MORE dramatic as the users drop off.  I see around a 20% performance improvement when a JAS server is loaded and cached up nicely.

Now that you can determine the number of users you need for load testing, you can execute this with the software or services that you have access to.  At Fusion5 we use OATS and can assist with any load testing you need.  We also validate and continually measure interactive performance using ERP Analytics, which can produce all of the graphs that you see above.

Anecdotally, good performance from JD Edwards is when pages load in about 1.1 seconds. 
Figure 11: Another view of performance over time, but separating download time and server response time. The page load is generally in direct correlation to server response time.

We measure and record exactly what the end user experiences.  We can also report on the network traverse time and the server response time.  These are all critical values when determining what you need to fix.  We can run this reporting on different users or geographies too, so you can compare performance in a single city or around the world.

Tip 7: Test everything when you are doing "mock" go-lives

As I said, if I'm at a go-live, it's not my first rodeo.  Sure, I'll always have a nervous demeanour and perhaps a bit of a sick feeling in my stomach, but I do love it.  I like seeing all of the data and statistics around me that somewhat affirm the planning and effort that have gone into the project.  It's simple, when things go to plan.

Of course when a user does an open find over the F4211 and F42119 using a single non-indexed field and then wants to go to the end of the grid… with probably 20 million rows to be displayed…  I might not have tested that (nor catered for it in the sizing).  Oh, and when it doesn't return and they do it another 10 times (to be sure), that also was not in our test plan.  Nonetheless – there will always be challenges and things unexpected – your job is to reduce the number of them.

Mock go-lives are critical.  They do the following important tasks:

  • Assign responsibilities to all tasks: prior to, during and after the upgrade.  Ensure that these are on the run sheet and have all been tested before.
  • Version your run sheet, make sure all line items are filled out and ensure that there are accurate timings.  You will not get good timings on the first conversion, and perhaps not the second.  Subsequent to that you should be building out exactly how long the conversion is going to take, so that you can determine if you need to "look outside the square" when it comes to outage windows.
  • Make sure that people run integrity reports and check the results every time.  I've been involved in go-lives where an integrity did not match on the go-live weekend – but guess what?  It never balanced in the last 5 mock go-lives – it was never compared.  Getting everyone to run every step is a big lesson.
  • I only really care about rowcounts, but I know that the business will want integrity reports – so you might want a few.  Summing amount columns or hashing is another way to make technical people really happy.
  • Ensure that you move some WSJ history too.  Nothing is worse than a user logging in and not seeing the report they ran on the Friday before go-live weekend.  Anything you can do to reduce the call volume on the Monday after go-live – do it!
  • Timing: if things are too fast, you probably have a problem.  If things are too slow, you probably have a problem.  Make sure that things are predictable.
  • Sleep is important; people do not make good decisions under lots of pressure and with a lack of sleep.  Go-lives are tough and should be, but not at the expense of the team.  Don't let the team drive if they've worked 20 hours; get a local hotel and an Uber.  Plenty of food and drinks for the project team too.

Get a runsheet, live by the runsheet and succeed with the runsheet.  Regular comms are critical – good luck!

Tip 8: Security

Security sometimes takes a back seat, but please, please – don’t let it.  Without exaggeration it’s about 1,000,000 times easier to make security changes before a go-live than after.

Simple things need to be completed and tested before go-live:
  • Complete production security model in place, including row security
  • All table auditing enabled (if you do this)
  • Complex JDE password for database and for JDE
  • Do not use the JDE account for the system account for your users.  Please do not do this, create “jdeuser” or something much better that will not get locked out
  • Check that the password expiry policy for Oracle is not default, or else your system accounts will start locking out
  • Change your default node configuration, do NOT leave this standard.  This is one of the largest security holes in the suite.
  • LDAP or SSO is critical.  Controlling user access is easiest if someone else is doing it (I find).  So if the desktop team is decommissioning users (oh and changing passwords) this is a big bonus and will save time and money.   The Fusion5 SSO offering is really cool too, especially if you want to use Azure to MFA people under certain criteria – all done by someone else!
  • Make sure that your data is encrypted at rest and in transit
  • Get your security groups tight and your firewalls enabled
  • Default access should be no access
  • Adopt the most stringent security posture your business can afford

Here is an interesting tip: quite often row security can actually be good for performance.  Why?  Because it ensures that there is a where clause on what is generally an indexed field.  If you are row securing MCU or CO, then the where clause is enforcing less IO and hopefully a quicker result!

Tip 9: Monitor relentlessly

Tips 9 and 10 are closely related, but this tip is all about feedback.  If you know things are going well, that's great.  If you know things are going poorly, that is great too – because you can fix it.  The worst case scenario is that things are going pear-shaped in the background and you only hear about it when a user raises a case.  You need to be KPI’d on finding errors before your users – period.

How can you find errors before your users?  Here are a couple of tricks that Fusion5 implements for our clients:
ERP Analytics
Sometimes referred to as the black box for your ERP, we use this to monitor performance and usage of JD Edwards.  It records every page load, by every user, every minute of every day.  This information is incredibly powerful for benchmarking and comparing ANY change in your entire enterprise.
UBE Analytics
Having access to the runtime, rows processed, user and server information for every batch job allows us to continually monitor the critical two tiers of JD Edwards.  Reading this with ERP Analytics gives more information on where performance problems might be and another point of data to compare with.
Log monitoring
Fusion5 has a very advanced cloud formation in AWS which utilises cloudwatch to monitor all log files, UBEs and JD Edwards connections.  This enables us to graph and monitor user connections, concurrent UBEs and search ANY logfile for ANY error – ever.  This is a single console for all logs across JD Edwards.  This approach and consistency can be used with many different installation types, not just limited to AWS.
DB growth monitoring
Keeping an eye on table by table database growth is critical for understanding if a process has gone rogue.  It’s also critical for maintaining consistent database performance.  Regular rowcount reporting and size reporting will ensure that you can deliver a service level to your users that is acceptable.  Maintenance of your data size is important for costs and restoration times.

Figure 12: Sample custom dashboard showing metrics that are 100% relevant for JD Edwards

Figure 13: AWS log insights provides intelligence that would previously be impossible to find.  This shows a graphical representation of errors and type of errors over 8 separate web servers.


Tip 10: Continuous improvement

It’s a theme that is not going to go away.  If you have implemented all of the shiny new toys from JD Edwards, then you need to show ROI. 

This is a theme that we are going to hear a lot in the coming years.  Even the way Oracle is releasing JD Edwards functionality follows the ideals of continuous delivery by committing to release 9.2 until 2030.  We are getting improvements continuously, not in major releases. 

I think there are 3 ways that you can make a difference to your business in relation to adopting continuous improvements with JD Edwards.

Finding and implementing RPA (Robotic Process Automation) Opportunities

There is so much opportunity here, and all of the tools are at your fingertips.  You can use ERP Analytics to find processes (applications / sequences) that are run frequently.  Use this data to go back to the business and observe what the end users are doing.  For instance, if you see that P42101 is being run 2,000 times a day – look for opportunities to improve this process.  This could be EDI, this could be spreadsheet macros that call an orchestration.  What’s an orchestration I hear you ask? 

Orchestration is the ability to turn any piece of JD Edwards functionality into an API.  An API that is easy to call and can be authenticated with the user’s username and password.  So, exposing functionality to an Excel Macro – would be very easy.  You could write an orchestration to enter a sales order (in my example) and then a smart macro to call the orchestration with the data on the spreadsheet.  It could prompt for a username and password.  If your users are being sent orders in spreadsheets – you may have just increased their productivity and reduced a significant amount of human errors.

RPA implementation can be for simple or complex processes.  Look for repetition and eliminate it, as an ex-programmer – if you see repetition in code – there are inefficiencies in that code.  ERP Analytics will then allow you to measure the success of your RPA, as the usage of the applications should go down with good RPA implementation.

Orchestration is free to implement and can make a huge difference to mundane tasks.

Continually optimise your architecture

This may be more relevant for public cloud implementations – but let’s be honest, most are public cloud implementations.  You must continually drive for reduced hosting costs for all of the JD Edwards assets.  Quite often this is difficult unless you have architected your solution for the cloud and turned the monolithic JD Edwards into an elastic cloud tenant.  This can be done.
Fusion5 has created what we think is a world-first elastic JD Edwards CloudFormation stack for AWS.  This architecture has the ability to expand and contract with load and demand.  We are able to define the rules that create new web servers and new batch servers and then retire them when they are not needed.  This allows our clients to have a very efficient architecture, and if they feel that they are paying too much, we can reduce the size and number of machines accordingly.
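To sketch the idea (boto3 against the Auto Scaling API; the group name and schedules are hypothetical, not our actual rules), scheduled actions can grow the web tier for the working day and shrink it overnight:

```python
# Hedged sketch: scheduled scaling rules for a JD Edwards web tier.
# The Auto Scaling group name and the schedules are hypothetical.
import boto3

autoscaling = boto3.client("autoscaling", region_name="ap-southeast-2")

# Scale the web servers up for the business day...
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="jde-web-asg",
    ScheduledActionName="business-hours-up",
    Recurrence="0 7 * * MON-FRI",   # cron expression, UTC by default
    MinSize=2, MaxSize=8, DesiredCapacity=4,
)

# ...and back down overnight, when nobody is entering sales orders.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="jde-web-asg",
    ScheduledActionName="overnight-down",
    Recurrence="0 19 * * *",
    MinSize=1, MaxSize=2, DesiredCapacity=1,
)
```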

Figure 14: Choosing between re-platforming and rehosting can be difficult; a re-platform is going to provide payback over time

A good illustration of the options you have available when you migrate is above.  A lift and shift [rehost] is a simple project, but it will not allow you to get true cloud benefits from native constructs (cheaper storage, elasticity or additional security).  If you do a re-platform (as I recommend), you can reshape JD Edwards into a much more flexible cloud tenant.
If you did a rehost, I’d guess you might implement about 8 cloud constructs (EC2, EBS, ALB, multiple AZs, EFS if you are lucky), whereas if you were re-platforming, you might use RDS, EC2, EFS, ALB, ASG, CloudWatch, Step Functions, Route 53, S3, Launch Templates, Target Groups and more!
It is much easier to get savings out of a re-platformed architecture.
At a number of sites I’ve seen savings of more than 50% month on month when we work hard at cloud cost reduction.

Continue to update JD Edwards

Patches for JD Edwards are now continuous, so your adoption should also be continuous.  I recommend making a plan, with a schedule of when you are going to take patches, when you are going to test them and when you are going to put them into prod.  Start simple: choose twice a year, and then work backwards for how long you are going to test, how long for retrofit, etc. 
If you’ve been lucky enough to re-platform (as above), then you are going to have some distinct advantages when it comes to deployment: changes can be deployed and tested much more rapidly and, actually, continuously.  If you have a flexible cloud implementation, you could build and deploy an alternate package for production and ease this out into the user community.  Our AWS CloudFormation stack allows us to deploy packages without outages; we can do this on a schedule and therefore allow environments to consume change at their own pace.  If there is an issue, we can back it out immediately and fix it.
Figure 15: Sample continuous deployment schedule – simplicity is important.
A flexible architecture allows you to be more aggressive with your consumption of change and keep more “up to date” with the latest code from Oracle.




blocking / locking... potato... potato


Blocking is a funny problem and quite often one of the last things that I look for when there are JD Edwards issues.

We've had some recent problems with some serious blocking, but the reconciliation between client and server has made the log analysis almost impossible...  What I mean is that there are often client errors with no server errors.  The web server will give up with nothing in the enterprise server logs... until there are IPC errors because queues are too large [too many BSFNs running in queue].  But there are lessons in this.

First, we could see that there was an increase in the number of instances of COSE#1000 errors.
ERP Analytics shows us all the historical details – wow, that is a lot of problems:


Detailed AWS CloudWatch Logs Insights gives us unparalleled capability to query and audit all of our logs:

Can you see what is actually being done here?  It is SO amazing.  We are looking at the last X [hours|days|minutes] in all log files for 14 servers, enterprise and web… looking for the relevant client [web server] and server [app server] logs that relate to any BSFN errors between the two, and then showing this as a timeline.

I think that the big lesson here is adopting a consolidated approach to your logging, like a SPLUNK-type approach.  If you adopt some consistency, then all of the advantages of monitoring and better global interrogation are opened up to you.

What we have actually done in this instance is use CloudWatch to ingest all of the JD Edwards log files.  We are consolidating server manager, WebLogic, enterprise server, system out and /var/log into our own log streams that we can query.

An example of a fairly complex query is below.

**CloudWatch Logs Insights** 
region: ap-southeast-2 
#Here are the log files that I want to query
log-group-names: jde_wlsServerJASLogsout, jde_wlsServerJASLogs, jde_webServerLogs, jde_entServerLogs  
#looking at the last hour
start-time: -3600s 
end-time: 0s 
query-string:
```
#fields I want to look at and display on the console
fields @timestamp  ,@message,  @logStream, @log, BSFN
#search string in the logs - this is a simple example that will only match web server logs
|filter @message like 'COSE#1000'
#Allows me to create my own fields from the output and summarise based upon these with a little bit of regex magic
|parse @message /(?<date>\d{2}\s+\S{3}\s+\d{4})\s+(?<time>\S+)\s+\[(?<errorlevel>\w+)\s*\]\s+-\s+\[(?<module>\w+)\].*BSFN:(?<BSFN>\w+)\s+user:\s*(?<user>\w+)\s+Env:(?<Env>\w+)/
| sort @timestamp asc
| limit 10000
```

And the results:
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|       @timestamp        |                                                                                                                 @message                                                                                                                  |                @logStream                 |              @log              |        BSFN         |    date     |     time     | errorlevel | module  |   user    |   Env    |
|-------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------|--------------------------------|---------------------|-------------|--------------|------------|---------|-----------|----------|
| 2019-08-28 06:15:19.809 | 28 Aug 2019 07:15:19,573 [WARN  ]  - [RUNTIME]         *ERROR* CallObject@3f09721b: COSE#1000 Request timeout: timeout after 90000ms host JDEPROD1:6017(6025) SocID:37364 PID:12066 BSFN:CommitReceiptHeader user:USER Env:JPD920UK  | WEBUK_ip_10_116_22_119i-084b990b0bb60c66e | 202760777498:jde_webServerLogs | CommitReceiptHeader | 28 Aug 2019 | 07:15:19,573 | WARN       | RUNTIME | AMRE001 | JPD920UK |
| 2019-08-28 06:24:31.258 | 28 Aug 2019 07:24:31,092 [WARN  ]  - [RUNTIME]         *ERROR* CallObject@2c3526cf: COSE#1000 Request timeout: timeout after 90000ms host JDEPROD1:6017(6025) SocID:37364 PID:12066 BSFN:CommitReceiptHeader user: USER Env:JPD920UK  | WEBUK_ip_10_116_22_119i-084b990b0bb60c66e | 202760777498:jde_webServerLogs | CommitReceiptHeader | 28 Aug 2019 | 07:24:31,092 | WARN       | RUNTIME | AMI001 | JPD920UK |
| 2019-08-28 06:34:21.978 | 28 Aug 2019 07:34:21,802 [WARN  ]  - [RUNTIME]         *ERROR* CallObject@74a86b0a: COSE#1000 Request timeout: timeout after 90000ms host JDEPROD1:6017(6025) SocID:37364 PID:12066 BSFN:CommitReceiptHeader user: USER Env:JPD920UK  | WEBUK_ip_10_116_22_119i-084b990b0bb60c66e | 202760777498:jde_webServerLogs | CommitReceiptHeader | 28 Aug 2019 | 07:34:21,802 | WARN       | RUNTIME | AME001 | JPD920UK |
| 2019-08-28 06:42:52.420 | 28 Aug 2019 07:42:52,371 [WARN  ]  - [RUNTIME]         *ERROR* CallObject@12ddb7bb: COSE#1000 Request timeout: timeout after 90000ms host JDEPROD1:6017(6025) SocID:37364 PID:12066 BSFN:CommitReceiptHeader user: USER Env:JPD920UK  | WEBUK_ip_10_116_22_119i-084b990b0bb60c66e | 202760777498:jde_webServerLogs | CommitReceiptHeader | 28 Aug 2019 | 07:42:52,371 | WARN       | RUNTIME | AE001 | JPD920UK |
| 2019-08-28 06:45:25.972 | 28 Aug 2019 07:45:25,747 [WARN  ]  - [RUNTIME]         *ERROR* CallObject@256577d3: COSE#1000 Request timeout: timeout after 90000ms host JDEPROD1:6017(6024) SocID:37846 PID:12066 BSFN:CommitReceiptHeader user: USER Env:JPD920UK  | WEBUK_ip_10_116_22_119i-084b990b0bb60c66e | 202760777498:jde_webServerLogs | CommitReceiptHeader | 28 Aug 2019 | 07:45:25,747 | WARN       | RUNTIME | AM001 | JPD920UK |
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
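The console is great for ad-hoc work, but the same query can be scripted.  Below is a minimal boto3 sketch that starts a Logs Insights query over the JDE log groups named above and polls until it completes (using a simplified filter rather than the full parse):

```python
# Minimal sketch: run the Logs Insights query above from code with boto3.
# Log group names and region follow the example; adjust for your account.
import time
import boto3

logs = boto3.client("logs", region_name="ap-southeast-2")

QUERY = """
fields @timestamp, @message, @logStream, @log
| filter @message like 'COSE#1000'
| sort @timestamp asc
| limit 10000
"""

def run_insights_query(log_groups, query, window_secs=3600):
    """Query the last window_secs of logs and poll until the query finishes."""
    end = int(time.time())
    start = logs.start_query(
        logGroupNames=log_groups,
        startTime=end - window_secs,
        endTime=end,
        queryString=query,
    )
    while True:
        result = logs.get_query_results(queryId=start["queryId"])
        if result["status"] in ("Complete", "Failed", "Cancelled", "Timeout"):
            return result
        time.sleep(1)

results = run_insights_query(
    ["jde_wlsServerJASLogsout", "jde_wlsServerJASLogs",
     "jde_webServerLogs", "jde_entServerLogs"], QUERY)
for row in results["results"]:
    print({f["field"]: f["value"] for f in row})
```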

Entries like the above can mean many things: network problems, security problems...  But when you look at the server logs and see nothing, start to think about blocking.  Also look at the BSFN summary page for the web server in question; this will show you the BSFN runtimes – max, min and average.  If this shows the functions are generally very fast, then you know that you might have some locking problems.

Now, validate this at the database.  These statements will only work while there is locking / blocking – so have them ready.  These are also Oracle statements.

At the database:

Show me the blocking:

select
   (select username from v$session where sid=a.sid) blocker,
   a.sid,
   ' is blocking ',
   (select username from v$session where sid=b.sid) blockee,
   b.sid
from
   v$lock a,
   v$lock b
where
   a.block = 1
and b.request > 0
and a.id1 = b.id1
and a.id2 = b.id2;

What are they doing / blocking / locking?

select
   c.owner,
   c.object_name,
   c.object_type,
   b.sid,
   b.serial#,
   b.status,
   b.osuser,
   b.machine
from
   v$locked_object a,
   v$session b,
   dba_objects c
where
   b.sid = a.session_id
and a.object_id = c.object_id
and b.sid = 1340;

All in one – with JDE information: machine, process, program and statement

select
   c.owner,
   c.object_name,
   c.object_type,
   b.sid,
   b.serial#,
   b.status,
   b.osuser,
   b.machine,
   b.process,
   b.program,
   b.sql_id,
   REPLACE(d.SQL_TEXT, CHR(10), '') STMT
from
   v$locked_object a,
   v$session b,
   dba_objects c,
   v$sqltext d
where
   b.sid = a.session_id
and a.object_id = c.object_id
and d.address = b.sql_address
and b.sql_hash_value = d.hash_value
and b.sid in
  (select a.sid
   from v$lock a, v$lock b
   where a.block = 1
   and b.request > 0
   and a.id1 = b.id1
   and a.id2 = b.id2);
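Because these queries only return rows while the blocking is in flight, it can be worth polling them.  Here is a minimal sketch (python-oracledb again; credentials and DSN are placeholders) that checks for blockers every few seconds and prints anything it finds:

```python
# Hedged sketch: poll the v$lock blocker/blockee query so you catch
# blocking while it is happening. Credentials and DSN are placeholders.
import time
import oracledb

BLOCKING_SQL = """
select (select username from v$session where sid = a.sid) blocker,
       a.sid,
       (select username from v$session where sid = b.sid) blockee,
       b.sid
from   v$lock a, v$lock b
where  a.block = 1
and    b.request > 0
and    a.id1 = b.id1
and    a.id2 = b.id2
"""

with oracledb.connect(user="monitor", password="secret",
                      dsn="jdeprod-db:1521/JDEPROD") as conn:
    cur = conn.cursor()
    while True:
        for blocker, bsid, blockee, wsid in cur.execute(BLOCKING_SQL):
            print(f"{blocker} (sid {bsid}) is blocking {blockee} (sid {wsid})")
        time.sleep(5)
```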

If you see a long list of results (or a single line), you might want to consider your next actions.  Blocking can be pretty normal, but it can also be the result of a deadlock (surely that should be a movie title).  A deadlock occurs when two sessions each hold a lock that the other one needs, so neither can proceed.  Interestingly, I've seen locks last for hours without being deadlocks.  Another very interesting problem that does occur is that a client (JDBC) process can be blocked by a server process (jdenet_k).  I've seen this quite a bit and have even programmed enough bad code to create it myself - ha!

Things to remember: if a jdenet_k process has been blocking for > 5 minutes, there is probably something wrong (the client will have given up – the timeout is probably 90 seconds)... so the BSFN is still running.  The actual human that tried the first time is probably trying again in another JAS session... things could be escalating...  Generally I think you can kill those sessions.  UBEs are a little different – leave them alone.

If you see java doing the locking, you need to make the call.  Try to get back to the JDE session that is doing the damage and see what the user is/was doing [they'll generally deny it].  It can be little things that cause big locks.


UBE Analytics - short demo - understand JDE batch performance better

I'm trying to do a few more short videos on how UBE Analytics works and what you can do with the reporting and dashboarding.  Here is the first of a number.



Let me know what you think.

As you are probably aware, UBE Analytics is a service that Fusion5 has created.  You are able to subscribe to this service and then extract insights from your UBE processing data.

This is really handy for comparing days or weeks of processing.  Comparing regions or actual package deployments.  Super simple.

None of the history needs to be kept in JD Edwards, it can all be moved safely and securely to the cloud.

We provide an agent that runs on premise and copies the data to the cloud.  You schedule this as often as you need - our dashboards look after the rest.


Creating custom metrics just got a whole lot easier

The world is full of data and extracting insights from this is always a challenge.

Patterns in data can assist us to predict the future, there is no doubt about that.  If you can determine a predictor for poor sales or poor performance, then this might enable you to be proactive the next time it occurs.  This is fairly cryptic, but what if I could tell you that the sales order entry screens were being run less over the last 2 weeks, and that the average lines processed by R42565 (invoice print) was also down over the last couple of weeks?  Well, this is a good indicator that sales are going to be down too – but what if user behaviour was a lead indicator?  What if you could see that activity was down and talk to your staff about why this is occurring?  The same insights could be made in all modules of JD Edwards.  Everyone is looking at the transactional data – I’m looking at the user behaviour.

At Fusion5 we created ERP Analytics about 10 years ago, giving our clients some really great insights into their user behaviours.  We’ve augmented this recently with UBE Analytics, which allows you to see exactly what is going on in your batch job activities.  You can see rows processed and runtime, critical for evaluating performance.

Now, the combination of these two tools can allow you to create the most insightful and simple reporting tools around your ERP.  You can create reports on engagement time, trend data about performance, or nice and easy-to-read gauges that all of your users can consume in E1 Pages!



As you can see from the above, I have defined these custom controls in Data Studio to report on very distinct values.  I’ve defined the graphs to have custom ranges, which is really easy in Data Studio.


I can set colours [colors for my American readers], maximums and minimums for any piece of data that I have.  I can also filter the data.

In this instance, I can look at any data available from batch or interactive JDE usage.

Things that you can put onto any report or graph:
  • How long a user spent on a screen (name the screen, name the user if you want – or group of users)
  • How many rows a UBE processed
  • How often a UBE is run
  • How long a UBE took -  and compare months, weeks or days
  • How many times a version of a form has been loaded
  • How many pages loaded a day
  • Average server response time for loading forms in certain system codes – or all of them


Above is a list of the fields that are available for evaluating batch


Just some of the fields available for interactive

You get the picture, really easy to select the metric, define some ranges and GO!


Here we can see that I’m looking at the average runtime for UBEs over the last week and have defined the ranges that are appropriate for this client.  I could further refine this for UBEs that I’m interested in, like invoice print or sales update.


Here you can see your report in JDE using E1 Pages.

Those colours are terrible – employ the classic JDE blue - #1e4a6dff


Or specific information in JDE itself…


JDE call an orchestration from excel - RPA it yourself

Imagine you go to your boss and explain: 

"I just made myself redundant.  I created a pile of orchestrations and macro's in spreadsheets that does all of my mundane tasks and now I have 20 hours a week free.  and guess what, I'm not making ANY mistakes!"

You'd get promoted right?  RIGHT!  I'd promote you.

Do you ever get a spreadsheet from someone and they say “just punch all of this into JDE”?  Are you like me: when you need to do something more than 6 times (I think that this is my repetition threshold), do you get a burning desire for automation?  Actually, you cannot physically bring yourself to do the task because you know you can make it easier and more rewarding...

Well, this post might help you!

Here is everything you need to create a spreadsheet that can call JDE functionality via orchestration.

First, I create an orchestration that takes the following input:

{
  "orch_input_TimeOfReading" : "163403",
  "orch_input_dateMeasurement" : "10/01/2019",
  "orch_input_remarkForReading" : "Latest Temp",
  "orch_input_Temperature" : "36.25",
  "P1204_Version" : "",
  "orch_input_szAssetNumber" : "1007"
}

I'm not going to cover off how to create an orchestration; there is a lot of content out there.  It's easy and "point-and-clicky" and aimed at the functional person.  Hey, us techos are not going to be needed soon.

The orchestration client screen looks like this – I do some testing to ensure that it's recording the data in JDE in the correct place.

Nice,  it's working.

So then I use Postman to do some independent testing – get the exact syntax, know the headers I need to set, get my auth correct... 


Wow, postman is too cool – what about this for the docs:


It is amazing!!!

Back to excel:


My sheet looks like this: I have a single ActiveX button and a single field (for the obfuscated password).  Wow!

My code is like this:

This is super simple and readable – that is why I did it.  Also, I'm no expert at VBA scripting, so... this is the result of 1 hour of Google and some testing.

Private Sub CommandButton1_Click()
 
  Sheet1.Cells(11, 4).Value = "Processing"
  Sheet1.Cells(11, 5).Value = 0
  CallJDEOrchestration
 
End Sub

Sub CallJDEOrchestration()

  Dim URL As String
  Dim JSONString As String
  Dim objHTTP As New WinHttpRequest

  Dim Username As String
  Dim password As String
  Dim auth As String

  ' Credentials: username from the sheet, password from the obfuscated text box
  Username = Sheet1.Cells(1, 2).Value
  password = passwordTxtBox.Value

  ' Basic auth is just base64("user:password")
  auth = EncodeBase64(Username & ":" & password)

  'MsgBox auth, vbCritical, "Hello World"

  URL = "https://f5dv.mye1.com/jderest/orchestrator/orch_AddTempReadingForAsset"
  objHTTP.Open "POST", URL, False
  objHTTP.SetRequestHeader "Authorization", "Basic " & auth
  objHTTP.SetRequestHeader "Content-Type", "application/json"

  ' Build the orchestration payload from the cells on the sheet
  JSONString = "{""orch_input_TimeOfReading"" : """ & Sheet1.Cells(10, 2).Value & _
  """,""orch_input_dateMeasurement"" : """ & Sheet1.Cells(9, 2).Value & _
  """,""orch_input_remarkForReading"" : """ & Sheet1.Cells(7, 2).Value & _
  """,""orch_input_Temperature"" : """ & Sheet1.Cells(8, 2).Value & _
  """,""P1204_Version"" : ""ZJDE0001"",""orch_input_szAssetNumber"" : """ & Sheet1.Cells(6, 2).Value & _
  """}"
 
  objHTTP.Send JSONString

  ' Write the response body and HTTP status back to the sheet
  Sheet1.Cells(11, 4).Value = objHTTP.ResponseText
  Sheet1.Cells(11, 5).Value = objHTTP.Status
 
End Sub

' Base64-encode a string (used for the Basic auth header) via MSXML2
Function EncodeBase64(text As String) As String
  Dim arrData() As Byte
  arrData = StrConv(text, vbFromUnicode)

  Dim objXML As MSXML2.DOMDocument
  Dim objNode As MSXML2.IXMLDOMElement

  Set objXML = New MSXML2.DOMDocument
  Set objNode = objXML.createElement("b64")

  objNode.DataType = "bin.base64"
  objNode.nodeTypedValue = arrData
  EncodeBase64 = objNode.text

  Set objNode = Nothing
  Set objXML = Nothing
End Function



You will need to ensure that your project has the following references enabled (Tools -> References): Microsoft WinHTTP Services, version 5.1 (for WinHttpRequest) and Microsoft XML (for MSXML2.DOMDocument):


You should then be able to change your URL and username and password (note that the field for the password is called passwordTxtBox)

This is using basic auth, so that needs to be enabled on the AIS server if it's going to work.

You can find a copy here – if you want to rip it apart:


You could do some pretty amazing and complex processing in JDE directly from Excel.  And... you won't have the haters saying "but you need to do that in JDE", because you actually did.

Enjoy.


How far away from code current are you?

How is everyone doing keeping “code current”?

Code currency is an arduous task, but it really does not need to be.  We need to transfer the power from the dark arts of CNC to the blissfully easy world of BI reporting – surely!  Is that not the panacea for all problems?  No, I agree – it's not going to be that easy.

I’ve been working on a suite that can tell you everything you need to know about being code current, including items like:

  • How modified an object is – to help you gauge retrofit
  • How much you use the object (in terms of users, engagement time and also page loads)
  • Is the code the same as the current ESU level from Oracle?

The above points alone would allow you to look at all of your code, and then determine how far from pristine you are.  It would allow you to then look at usage and modification information and then choose how much effort you need to put into getting yourself code current.

As soon as you make changes and promote to prod, all of the reports can be updated to reflect this…


For example, this report is gold – a “honey pot” for determining code currency:

  • This shows you all JDE applications being used and how much they are being used.
  • This report compares the code at this site with the latest Oracle patches (yes, true – this is what is being done) and can tell you whether the code is a binary equivalent of the latest ESUs from Oracle.
  • It only lists the programs that are not the same as the latest pristine code.
  • Clients can see what objects are being used, plus how much screen time and how many users those objects get (for the specified date range).
  • All these data points allow clients to make better decisions on which modifications to keep.  You can quickly see those that are not used by many people and have many OMW actions – mods that are not used… get rid of them!
  • You can quickly slice and dice by system code and know what needs to be tested when you get code current.

One of the nice things is that we keep a copy of pristine updates with all ESUs and then we generate hashes (like a unique signature of the code) in a cloud-based database.  We have code that will enable you to create a signature of your code and voilà – we can tell if you are the same as PS920 with ALL the ESUs.
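The signature idea itself is simple; here is a minimal sketch (Python hashlib; the directory layout is a hypothetical placeholder for wherever your object source lives):

```python
# Minimal sketch of the "code signature" idea: hash each object's source
# artifact so it can be compared against a pristine PS920 hash.
# The directory layout below is a hypothetical placeholder.
import hashlib
import pathlib

def signature(path):
    """Return a SHA-256 signature for one object's source file."""
    return hashlib.sha256(pathlib.Path(path).read_bytes()).hexdigest()

local = {p.name: signature(p) for p in pathlib.Path("objects/DV920").glob("*.c")}
pristine = {p.name: signature(p) for p in pathlib.Path("objects/PS920").glob("*.c")}

for name, digest in sorted(local.items()):
    if pristine.get(name) != digest:
        print(f"{name} differs from pristine")
```

Identical bytes give identical hashes, so two signatures matching tells you the object is a binary equivalent of pristine without shipping the code anywhere.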



You can see from the above that we can have a very similar view for reports.
We can see how often the reports are run, and only see those that have changed in the latest code from Oracle.
This allows us to see the reports that we need to retrofit.

I think that with a dashboard like the above (note that you can actually compare all object types), getting code current becomes a far more manageable exercise.
