Channel: Shannon's JD Edwards CNC Blog

iASP, JDE, DBDR


This is going to get limited interest, I know.

Wow, I’m finding that AS/400s are getting more and more painful to support with JD Edwards.  An idea one of my learned colleagues just had was that we should charge AS/400 clients an additional 50% on their rates for the frustration and premature aging we get from managing their kit!  Ha…  Sorry, Nigel.

For instance, how about this.

[image]

Simple datasource definition using JD Edwards.

I want to do a quick test using another AS/400 for production.  You might ask why?  I’ve built my PD920 environment early (before the go-live weekend).  I’ve merged UDOs and done everything except the data and control table merges.  So, I’m going to test the entire environment with a full set of converted data and control tables to ensure things are peachy!  This is great to have done before the go-live weekend.

I quickly change the Machine Name above and the library, and think I’m going to rely on DBDR to do the rest.  Simple!

Restart services, and most things work (I say most) – UBEs don’t.

UBE logs have

1267374           Fri Mar 17 09:56:09.280520      dbdrv_log.c196

            OS400QL001 - ConnectDB:Unable to connect to DS 'Business Data - PROD' in DB 'Business Data - PROD' on Server DB 'CHIJTD41' with RDB Name 'CHIJPD61' via 'T' with Commitment 'N'. QCPFMSG   *LIBL      - CPFB752 - Internal error in &2 API

1267374           Fri Mar 17 09:56:09.280888      dbdrv_log.c196

            OS400RV007x - DBInitConnection:PerformConnection failed

1267374           Fri Mar 17 09:56:09.280968      jdb_drvm.c794

            JDB9900164 - Failed to connect to Business Data - PROD

1267374           Fri Mar 17 09:56:09.281024      jtp_cm.c282

            JDB9909003 - Could not init connect.

Oh man!!!  AS/400 – why do you do this to me???

Then another learned colleague tells me, “Did you use JDE to change the data source information?  You can’t do that – you need to use SQL and change the OMLL field in F98611 to reference the correct iASP for the query…”  HUH?  Oh – of course!  Why didn’t I think of that.

select * from svm920.f98611;
select omdatp, omll from svm920.f98611;
update svm920.f98611 set omll = 'CHIJTD41' where omdatp in ('Business Data - PDT', 'Business Data - PROD', 'Control Tables - PDT', 'Control Tables - Prod');

Great – another restart of services and we are running again, and now UBEs are processing.


Cache debugging–JDBj Service cache vs. Database Cache


We all love cache, as we think it makes things quicker.  Cache is the concept of moving data away from the single source of truth in order to improve performance.  Cache has an inherent problem: it distances itself from the single source of truth, and therefore causes locking and concurrency issues.

JD Edwards has a number of different caches, and you need to do lots of digging to find out which cache you are dealing with.  Data, for the main part, has two caches: the JDBj service cache and the database cache (kernels).

The JDBj cache is easy – it is available to clear in SM.  If Java code is running, it will look here for cached values.

The JDBj service cache is the important one; this is where the data is.  Refer to the table below to see which tables are included in the JDBj service cache.

[image]

The service cache (JDBj)


Table      Table Name/Description                      Others
F0004      User Defined Code Types                     & Database Cache
F0005      User Defined Codes                          & Database Cache
F0005D     User Defined Codes - Alternative Language   Only Service Cache (Not Database Cache)
F0010      Company Constants                           & Database Cache
F0013      Currency Codes                              & Database Cache
F0025      Ledger Type Master File                     & Database Cache
F0092      Library List - User                         Only Service Cache (Not Database Cache)
F00941     Environment Detail - OneWorld               Only Service Cache (Not Database Cache)
F0111      Address Book - Who's Who                    Only Service Cache (Not Database Cache)
F9500001   CFR Configuration Table                     Only Service Cache (Not Database Cache)
F95921     Role Relationship Table                     Only Service Cache (Not Database Cache)
F9861      Object Librarian - Status Detail            Only Service Cache (Not Database Cache)

Wouldn’t it be nice to be able to drill down into this cache and see the values?  Well, nice for a nerd like me…  Perhaps not everyone wants to see this.

Database Cache

This is the kernel cache.  It seems to be in shared memory for all kernels of a logical data source, so all kernels refer to the same values in cache.  If these are updated, then everyone on the server gets to enjoy this.  Note that UBEs create their own cache at initialisation time.

The P98613 (Work With Database Caching) application will list all tables cached within your own environment, because the tables defined can vary between EnterpriseOne versions and Tools Releases.  The screens below show the tables where data is cached.

Note that P986116D can also help you clear one table at a time, but this is kernel cache (BSFN cache).  This is not going to affect JAS!

Clear all database cache – P986116D – advanced

[image]

See the tables being cached

[image]

Clear a table at a time

[image]

run P986116D W986116DA in fast path

[image]

Choose the table that you want to clear the cache for.

 

The Dilemma

There are many scenarios where cache clears need to be coordinated.  For example, if you are in P0010 and you change the period of a company, the new functionality in 9.2.X will do the kernel cache clear – but guess what, it does not do the JDBj cache.

So…  When you go to enter a journal, you get a period error – because the JDBj service cache does not refresh automatically.  There is a bug currently due for fix in 9.2.1.1 (Bug 24929695: ISSUE WITH JDB_CLEARTABLECACHE) which seems to indicate that there is going to be a link from the JD Edwards runtime back to SM, to enable the JDBj service cache to be cleared from JDE.  This would be nice.  It seems that this calls clearTableCacheByEnvironmentMessage, but when I search my 9.2.1.0 source code and system/include, I don’t see any references to it.

I’m guessing that there is going to be a services entry or perhaps a PO that will define the SM URL and port so that an automated cache clear can be triggered.  They (Oracle) might also use the AIS / REST-based interface to SM to enable this functionality.

 

 

JD Edwards go-live–I want to see my old Jobs


This is a common problem and one that you really need to prepare for.  Offer the service up and your users are going to love it.  As it happens, when you “go live” you generally change your SVM data source, which means the F986110 and F986114 etc. are going to be missing their old records.  For example, I’ve just done a 910 to 920 go-live, and now my users in 920 cannot see their old PDFs and CSVs.

Right, so what can I do?

A quick squizz at the old printqueue (/E910SYS/PRINTQUEUE) (yeah, IFS = AS/400) tells me that there are only 750,000 PDFs…  What!  Plus the logs and the CSVs and more – no way, about 1,000,000 files.  No wonder it was hard to browse the IFS.

Okay, I don’t want to move the million files, and selective mv’s get errors in STRQSH.  Man, that is the crappiest interface EVER!!!  It’s like the world’s worst beta ever.

[image]

Sure, I like being able to run unix commands on the green screen – but that interface!!!  Wow, crap.  Does anyone know if I can ssh to the box without a 5250?

Honestly, it’s sooooo terrible.  You never know if a command has hung or whether it’s complete.

Anyway, stop complaining…

So, there are too many files to run any commands – I cannot use find with exec – and the interface is so bad that it makes everything worse.

> ls /E910SYS/*                                          
  qsh: 001-0085 Too many arguments specified on command. 
         0                                               
  $
                                                      

I get the above no matter what I try (after 5 minutes of course)
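For what it’s worth, the usual way around an argument-list limit is to let find hand each file to mv one at a time, rather than expanding a glob on the command line.  Here is a minimal sandbox sketch of that pattern – the temp directories and dummy file names stand in for /E910SYS/PRINTQUEUE and /E920SYS/PRINTQUEUE, and whether qsh’s find supports -exec on your box is another question:

```shell
# Sandbox stand-ins for /E910SYS/PRINTQUEUE and /E920SYS/PRINTQUEUE.
SRC=$(mktemp -d)/PRINTQUEUE
DST=$(mktemp -d)/PRINTQUEUE
mkdir -p "$SRC" "$DST"

# Fake a handful of job outputs (the real directory held ~750,000 of these).
for i in 1 2 3 4 5; do
  touch "$SRC/R5642005_SIM001_1336805${i}_PDF.pdf"
done
touch "$SRC/keepme.log"   # a non-PDF file that should stay behind

# find passes each match to mv individually, so no argument-list limit applies.
find "$SRC" -name '*_PDF.pdf' -exec mv {} "$DST" \;

# Count what arrived, and show what stayed behind.
ls "$DST" | wc -l
ls "$SRC"
```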

I finally decide to go rogue on this problem.

I run the following SQL:

-- This is slightly unrelated, but I copy over all of the records from svm910 so that I can see them in WSJ for retrieval.  Nice – now I just need to put the files where JDE expects them.  Note that I’m still using the filesystem for my PrintQueue.

insert into svm920.f986110 (select * from svm910.f986110) ;

--I now build the mv commands and pop them into a .sh file for execution through STRQSH:

select 'mv /E910SYS/PRINTQUEUE/' || trim(JCFNDFUF2) || '_' || JCJOBNBR || '_PDF.pdf /E920SYS/PRINTQUEUE'
from svm920.f986110 where jcsbmdate > 117070 and jcenhv like '%910%';

This generates a pile of these (50K); I’ll do a couple of days at a time!

mv /E910SYS/PRINTQUEUE/R5642005_SIM001_13368050_PDF.pdf /E920SYS/PRINTQUEUE                        
mv /E910SYS/PRINTQUEUE/R5747011_SIM001_13368051_PDF.pdf /E920SYS/PRINTQUEUE                        
mv /E910SYS/PRINTQUEUE/R5543500_SIM002_13368052_PDF.pdf /E920SYS/PRINTQUEUE                        
mv /E910SYS/PRINTQUEUE/R5641004_SIM901_13368053_PDF.pdf /E920SYS/PRINTQUEUE                        
mv /E910SYS/PRINTQUEUE/R5642023_SIM902_13368054_PDF.pdf /E920SYS/PRINTQUEUE                        
mv /E910SYS/PRINTQUEUE/R5531410_TOP008_13368055_PDF.pdf /E920SYS/PRINTQUEUE         

Once I have my 50000 lines, I create a copyPDF.sh in a small IFS dir and paste in the contents of the above.

I then chmod 755 this file and run it through STRQSH.
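If running 50,000 lines in one hit is too scary (or you want to be able to spot a hung chunk and re-run it), the generated script can be split into pieces and run one at a time.  A sandbox sketch of the idea – the paths, the 10 dummy files, and the 3-line chunk size are all stand-ins for the real thing:

```shell
# Sandbox stand-in for the IFS layout and the SQL-generated copyPDF.sh.
WORK=$(mktemp -d)
mkdir -p "$WORK/E910SYS/PRINTQUEUE" "$WORK/E920SYS/PRINTQUEUE"

# Build a small script of mv commands, one per line, like the SQL produces.
: > "$WORK/copyPDF.sh"
for i in 0 1 2 3 4 5 6 7 8 9; do
  touch "$WORK/E910SYS/PRINTQUEUE/R564200${i}_PDF.pdf"
  echo "mv $WORK/E910SYS/PRINTQUEUE/R564200${i}_PDF.pdf $WORK/E920SYS/PRINTQUEUE" >> "$WORK/copyPDF.sh"
done

# Split into 3-line chunks (the real thing might use 5,000) and run each chunk;
# a hang is then localised to one small, re-runnable piece.
split -l 3 "$WORK/copyPDF.sh" "$WORK/chunk_"
for f in "$WORK"/chunk_*; do
  sh "$f"
done

ls "$WORK/E920SYS/PRINTQUEUE" | wc -l
```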

Bosch!  I have 50,000 PDFs copied over to the E920 location so that my users can see their historical data.

Thick client installation not finishing–38% complete


Go lives provide a lot of fodder for my blog.

Lucky me is currently working through a plethora of errors, and now we are working on my favourite: client install problems!

I have an issue where the installer hangs at 38% on a thick client install.

I can see from the installer log (C:\Program Files (x86)\Oracle\Inventory\logs) that it’s hanging on the following process:

INFO: Username:jdeupg

INFO: 03/20/17 10:53:56.422 SetDirectoryPermissions = "C:\Windows\SysWOW64\icacls.exe""C:\E920" /inheritance:e /grant "e1local":(OI)(CI)F /t

I found a great article on MOS saying that you need to change the following parameter for the Oracle installer:

D:\JDEdwards\E920\OneWorld Client Install\install\oraparam.ini

JRE_MEMORY_OPTIONS=" -mx256m"

Once this was done, the installer finished without any problem.
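If you need to make this change on more than one deployment server, the one-line edit can be scripted rather than done by hand.  A sandbox sketch – the sample ini content below is made up, and only the JRE_MEMORY_OPTIONS line matters:

```shell
# Create a throwaway stand-in for oraparam.ini (the real file lives under
# "OneWorld Client Install\install\oraparam.ini" on the deployment server).
INI=$(mktemp)
cat > "$INI" <<'EOF'
[Oracle]
JRE_MEMORY_OPTIONS=" -mx128m"
EOF

# Replace whatever value is there with the larger heap setting, keeping a .bak.
sed -i.bak 's/^JRE_MEMORY_OPTIONS=.*/JRE_MEMORY_OPTIONS=" -mx256m"/' "$INI"

grep JRE_MEMORY_OPTIONS "$INI"
```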

This did occur on a fat client that already had 4 pathcodes installed, so the pressure was on.

When I cancelled the installer, I needed to change the package.inf file.  If you do not make this change, the following occurs.

[image]

Note that when you bomb out of the installer at this point, it leaves the following fields blank in the package.inf file (c:\E910).

You need to ensure that these two values in your local package.inf (c:\E920) are correct

SystemBuildType=RELEASE

FoundationBuildDate=SAT DEC 03 14:47:50 2016

Note that the second field must match the date from the package_build.inf file; this is found in the \\depserver\e920\package_inf dir.

[Attributes]
AllPathCodes=N
PackageName=PD7031900
PathCode=PD920
Built=Build Completed Successfully
PackageType=FULL
SPEC FORMAT=XML
Release=E920
BaseRelease=B9
SystemBuildType=RELEASE
ServicePack=9.2.01.00.07
MFCVersion=6
SpecFilesAvailable=Y
DelGlblTbl=Y
ReplaceIni=Y
AppBuildDate=Sun Mar 19 09:37:20 2017
FoundationBuildDate=Sat Dec 03 14:47:50 2016
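A quick way to confirm the two files agree before kicking off the installer again is to pull the FoundationBuildDate field out of each and compare.  A sandbox sketch with trimmed stand-in files (the real ones are the local package.inf and the package_build.inf under \\depserver\e920\package_inf):

```shell
# Throwaway stand-ins for the local package.inf and the depserver's
# package_build.inf; only the FoundationBuildDate line matters here.
LOCAL=$(mktemp)
DEP=$(mktemp)
cat > "$LOCAL" <<'EOF'
[Attributes]
SystemBuildType=RELEASE
FoundationBuildDate=Sat Dec 03 14:47:50 2016
EOF
cat > "$DEP" <<'EOF'
[Attributes]
FoundationBuildDate=Sat Dec 03 14:47:50 2016
EOF

# Extract the field value from each file and compare.
a=$(grep '^FoundationBuildDate=' "$LOCAL" | cut -d= -f2-)
b=$(grep '^FoundationBuildDate=' "$DEP" | cut -d= -f2-)
if [ "$a" = "$b" ]; then
  echo "FoundationBuildDate matches"
else
  echo "MISMATCH: local='$a' depserver='$b'"
fi
```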

Such a pain!

ERP analytics use case #99–the upgrade


I’ve been working on a significant upgrade over the last couple of months and we pulled the trigger over the weekend.  Things have been going okay so far (I’m always very conservative when doing upgrades).  We’ve not had any downtime and for the main part things are working.  This is an amazing result and testament to the stability of the code coming out of JD Edwards and also the great testing from the client.

Anyway, to my point.

We had an interesting scenario where users could not use P512000 in 9.2.  I could not believe it – how could this be right?  I looked at the security records between the two releases and they were solid (the same).  I’m a bit too black and white, so I say “they must not have run it ever”…

I then go to Google Analytics for their 910 environment to see:

[image]

I choose security analysis, as this is the core of what I’m checking

drill down to security

[image]

Choose the environment

[image]

Note that I’m using the last 3 months of data – actually looking at over 1.5 million page views.

I search for P512000

[image]

I find that it’s been used – wow, security was right.

And then I see the 50 users that have loaded that application in the last 3 months.  I can see when they loaded it (time of day, day of month) and also how long it took to load.

ERP analytics is making my job easier!

By the way, it turns out that the application loads P512000A when it loads…  I needed to add security for that!

Go-live infographic


Can you give your managers some insights into a go-live like this?

ERP analytics can give you some very unique insights into what is going on behind the scenes of an upgrade!

[infographic image]

rescuing E1local from a complete reinstall


We’ve all had it: after a package install, you cannot connect to E1Local!  Arrgghh!

Some clients get it more than others; it seems that virus scanning and other specifics about the client (taking VM snaps) kill all of the Oracle databases at once, and they seem to be unrecoverable.

C:\Oracle\diag\rdbms\e1local\e1local\alert\log.xml

ocal_ora_98068.trc:
ORA-01113: file 5 needs media recovery
ORA-01110: data file 5: 'C:\E920\DV920\SPEC\SPEC_DV7012000.DBF'
</txt>
</msg>
<msg time='2017-0

And jde.log

74644/74648 MAIN_THREAD                           Thu Mar 23 15:36:51.370000    jdb_ctl.c4199
    Starting OneWorld

74644/74648 MAIN_THREAD                           Thu Mar 23 15:37:00.529000    dbinitcn.c929
    OCI0000065 - Unable to create user session to database server

74644/74648 MAIN_THREAD                           Thu Mar 23 15:37:00.530000    dbinitcn.c934
    OCI0000141 - Error - ORA-01033: ORACLE initialization or shutdown in progress
 
74644/74648 MAIN_THREAD                           Thu Mar 23 15:37:00.530001    dbinitcn.c542
    OCI0000367 - Unable to connect to Oracle ORA-01033: ORACLE initialization or shutdown in progress
 
74644/74648 MAIN_THREAD                           Thu Mar 23 15:37:00.530002    jdb_drvm.c794
    JDB9900164 - Failed to connect to E1Local

This is painful.

Make sure that the current OS user is a member of the highlighted group below

[image]

Make sure you are using the server-based SQLPlus (the E1Local one, not the Oraclient one) after changing the security in sqlnet.ora to be NTS (see previous post):

C:\Oracle\E1Local\BIN\sqlplus.exe
C:\Oraclient\product\12.1.0\client_1\BIN\sqlplus.exe

C:\Windows\system32>sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Thu Mar 23 15:39:25 2017

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing opt
ions

SQL> shutdown
ORA-01109: database not open

Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area  805306368 bytes
Fixed Size                  3050800 bytes
Variable Size             381682384 bytes
Database Buffers          415236096 bytes
Redo Buffers                5337088 bytes
Database mounted.
ORA-01113: file 5 needs media recovery
ORA-01110: data file 5: 'C:\E920\DV920\SPEC\SPEC_DV7012000.DBF'

SQL> alter database datafile 'C:\E920\DV920\SPEC\SPEC_DV7012000.DBF' offline drop ;

Database altered.

Then impdp your datafile.  You need a user to be able to do this; I created jdeupg.

sqlplus / as sysdba

SQL> create user jdeupg identified by myP@55# ;

User created.

SQL> grant dba to jdeupg
  2  ;

Grant succeeded.

SQL> quit

Then at the command line

C:\Windows\system32>impdp jdeupg/myP@55# TRANSPORT_DATAFILES='C:\E920\DV920\spec\spec_dv7012000.dbf' DIRECTORY=PKGDIR DUMPFILE='spec_dv7012000.dmp' REMAP_TABLESPACE=SPEC__DV7012000:SPEC_DV7012000 REMAP_SCHEMA=SPEC__DV7012000:SPEC_DV7012000 LOGFILE='impspec_dv7012000.log'

Now, you need to be careful about the database directory object and how it was last set.  It’ll be set to the location of the last full package.  You need to set it to the location of the file that you are trying to rescue – in my case DV920\spec.

select * from all_directories;

ORACLE_HOME    /
ORACLE_BASE    /
OPATCH_LOG_DIR    C:\Oracle\E1Local\QOpatch
OPATCH_SCRIPT_DIR    C:\Oracle\E1Local\QOpatch
OPATCH_INST_DIR    C:\Oracle\E1Local\OPatch
DATA_PUMP_DIR    C:\Oracle/admin/e1local/dpdump/
XSDDIR    C:\Oracle\E1Local\rdbms\xml\schema
XMLDIR    C:\Oracle\E1Local\rdbms\xml
ORACLE_OCM_CONFIG_DIR    C:\Oracle\E1Local/ccr/state
ORACLE_OCM_CONFIG_DIR2    C:\Oracle\E1Local/ccr/state
PKGDIR    C:\E920\UA920\data\

While in sqlplus, recreate the PKGDIR directory (you may also need to drop tablespace SPEC_DV7012000 including contents – see below):

select * from all_directories ;
drop directory PKGDIR;
create directory PKGDIR as 'C:\E920\DV920\spec';

Note that you might need to delete the previous imp and exp log files too – these are the .log files in the directory (DV920\spec) that you are importing to.  I was getting:

C:\E920\DV920\spec>impdp jdeupg/PAss# TRANSPORT_DATAFILES='C:\E920\DV920\spec\spec_dv7012000.dbf' DIRECTORY=PKGDIR DUMPFILE='spec_dv7012000.dmp' REMAP_TABLESPACE=SPEC__DV7012000:SPEC_DV7012000 REMAP_SCHEMA=SPEC__DV7012000:SPEC_DV7012000 LOGFILE='impspec_dv7012000.log'

Import: Release 12.1.0.2.0 - Production on Thu Mar 23 16:21:36 2017

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit
Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing opt
ions
ORA-39002: invalid operation
ORA-39070: Unable to open the log file.
ORA-29283: invalid file operation
ORA-06512: at "SYS.UTL_FILE", line 536
ORA-29283: invalid file operation

Finally I have my ducks lined up, time to run the import

Starting "JDEUPG"."SYS_IMPORT_TRANSPORTABLE_01":  jdeupg/******** TRANSPORT_DATA
FILES='C:\E920\DV920\spec\spec_dv7012000.dbf' DIRECTORY=PKGDIR DUMPFILE='spec_dv
7012000.dmp' REMAP_TABLESPACE=SPEC__DV7012000:SPEC_DV7012000 REMAP_SCHEMA=SPEC__
DV7012000:SPEC_DV7012000 LOGFILE='impspec_dv7012000.log'
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
ORA-39123: Data Pump transportable tablespace job aborted
ORA-29349: tablespace 'SPEC_DV7012000' already exists

Job "JDEUPG"."SYS_IMPORT_TRANSPORTABLE_01" stopped due to fatal error at Thu Mar
23 16:22:07 2017 elapsed 0 00:00:10

Crappo, need to drop this

sqlplus / as sysdba

drop tablespace SPEC_DV7012000 including contents ;

Go again:

Starting "JDEUPG"."SYS_IMPORT_TRANSPORTABLE_01":  jdeupg/******** TRANSPORT_DATA
FILES='C:\E920\DV920\spec\spec_dv7012000.dbf' DIRECTORY=PKGDIR DUMPFILE='spec_dv
7012000.dmp' REMAP_TABLESPACE=SPEC__DV7012000:SPEC_DV7012000 REMAP_SCHEMA=SPEC__
DV7012000:SPEC_DV7012000 LOGFILE='impspec_dv7012000.log'
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
ORA-39123: Data Pump transportable tablespace job aborted
ORA-19721: Cannot find datafile with absolute file number 13 in tablespace SPEC_
DV7012000

Job "JDEUPG"."SYS_IMPORT_TRANSPORTABLE_01" stopped due to fatal error at Thu Mar
23 16:26:29 2017 elapsed 0 00:00:03

Damn, this means that my file truly is corrupt.  Right – grab a fresh one from the deployment server spec directory.

C:\E920\DV920\spec>impdp jdeupg/Pass# TRANSPORT_DATAFILES='C:\E920\DV920\spec\spec_dv7012000.dbf' DIRECTORY=PKGDIR DUMPFILE='spec_dv7012000.dmp' REMAP_TABLESPACE=SPEC__DV7012000:SPEC_DV7012000 REMAP_SCHEMA=SPEC__DV7012000:SPEC_DV7012000 LOGFILE='impspec_dv7012000.log'

Import: Release 12.1.0.2.0 - Production on Thu Mar 23 16:29:31 2017

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit
Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing opt
ions
Master table "JDEUPG"."SYS_IMPORT_TRANSPORTABLE_01" successfully loaded/unloaded

Starting "JDEUPG"."SYS_IMPORT_TRANSPORTABLE_01":  jdeupg/******** TRANSPORT_DATA
FILES='C:\E920\DV920\spec\spec_dv7012000.dbf' DIRECTORY=PKGDIR DUMPFILE='spec_dv
7012000.dmp' REMAP_TABLESPACE=SPEC__DV7012000:SPEC_DV7012000 REMAP_SCHEMA=SPEC__
DV7012000:SPEC_DV7012000 LOGFILE='impspec_dv7012000.log'
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type TRANSPORTABLE_EXPORT/INDEX/INDEX
Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/CONSTRAINT
Processing object type TRANSPORTABLE_EXPORT/INDEX_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/STATISTICS/MARKER
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Job "JDEUPG"."SYS_IMPORT_TRANSPORTABLE_01" successfully completed at Thu Mar 23
16:30:15 2017 elapsed 0 00:00:43

Working, and I can log into JDE.  What a saga.  But, now that I have the knowledge, this is going to save me time going forward.

Use case for ERP analytics #101–tracking down inappropriate activity


A client was reporting to me that there were some orders being placed in production by one of the testing users.  They asked me if I could track down where this was coming from.  Traditional answer – no, I have no idea.

This client is lucky enough to have ERP analytics installed at their site, so I was able to drill a little deeper.

The first thing I did was create a custom report for tracking down user data:

[image]

You can see from the above that I have region, city, network domain and more for this session.

The first tab I start with

[image]

Find user.

So now I can look for the user.  I could drill down on environment too, which means that I only get prod data.

Step 1 – find my user

[image]

Step 2 look at environments

[image]

Step 3, click on PD920

[image]

Now I see what has been going on!

I choose my language and region tab and I get the city and region information from the client

[image]

Bosch, I have loads of information about the perp – give this to the client to see what is going on!

I can create another tab to see what applications they were using

[image]

Crime solved – what’s next ERP analytics?


JD Edwards 9.2 to live on… and on… and on…


[image]

Collaborate inspired content

The announcement, in case you missed it, is that we are going to have JD Edwards 9.2 until 2028.  Wow, the propaganda that went with the delivery was pretty awe-inspiring, but honestly, what does this really mean?

Is Oracle trying to squeeze out the last bit of life from JD Edwards for the smallest investment possible?  We were very used to the traditional major release model (https://shannonscncjdeblog.blogspot.com.au/2016/10/jd-edwards-release-datesmakes-me-feel.html): every 3 or so years we were getting a major applications release.  This was good – it kept clients on their toes and kept everyone informed of the latest updates.  It was a self-promotion and marketing exercise at least, as all partners went out to their clients and told them why they needed to be on the latest release…

[image]

I must be honest, I’m in two minds about this announcement.  I can see a lot of positive things about it – but also negative.  I’m a glass half full kind of person, so let’s start with that:

The full half of the glass:

If we can change our mindset to be one of constant change and constant innovation, this announcement is cool.  We must however make the change and bring our clients along for the journey.

I always think about this as if I’m providing JD Edwards as a service for my clients (as I do) and how I can provide them a constant platform – which includes innovation.  This IS a difficult thing.  AWS gives me some amazing ability to do green / blue deployments and to test code changes slowly and constantly – but I need to build this for JD Edwards.  JD Edwards does not do this well natively.  For instance, if I was hosting JD Edwards for a client, I could easily deploy (with them knowing) a new pigeon pair of web server and app server on the latest tools release alongside the current tools release (green / blue).  I’d carefully monitor performance, issues, logs and more, and eventually phase all of my users over to the new tools release – they probably would not notice any difference.  This ability to do constant change is made easier with the constraint-free compute environment that I’m using to host JD Edwards.

The above is a small example with a big change (tools release), but changes get MUCH bigger – think application release.  Now this hypothetically could be done in a very similar way…  But, data, my single source of truth is going to be my challenge.  I could easily write triggers and routines that would synchronise (or run TC’s inline) to keep JD Edwards running between the two releases.  Wow, imagine that.  Go live for a group of users / locations…  Test and continually improve and deploy – possible.

Read all of the above and you can see that this is actually WHAT WE EXPECT.  We now expect our programs to update automatically, we expect the latest support of platforms and browsers and more importantly mobile device operating systems!  Cloud or more specifically SaaS has completely changed our expectations of large software.  We do not want to do big upgrades and create big disruptions, we want to do small and consistent upgrades with no disruptions.  I think we can do it.

If you are internal IT or you are a managed service provider, think of your JD Edwards instance as SaaS and think how you can give your customers a consistent and contiguous environment that is always up to date…  You can!  If you make the paradigm shift mentally, you can start to think creatively about how you are going to do this.  Oracle (in making this continuous improvement announcement) have forced us to think about our ERP differently, and for the better.

You are getting an environment (well, this is what it feels like) where you can more easily provide a managed service.  Wow – is that what Oracle is going to do eventually?  Hmmm, I’d think so.

There are some repetitive tasks that you will need to get better at to ensure that you are ready for continuous innovation:

  • get better at retrofit
    • code better
    • modules, reuse and more
    • know the cost of your modifications and bring it forward if there is real benefit to the business
  • ESU’s all the time – put them on a schedule
  • Get your underlying technology ready to support continuous change
    • blue / green deployment
  • invest in automated regression testing
  • monitoring is critical (how about ERP Analytics)
  • Performance testing is important

At the end of the day, JD Edwards is going to play better as a SaaS product – though it will still need to be managed closely.

 

The empty half of the glass

We need to also think critically when we get an announcement like this too.  I had a feeling that the software was being put out to pasture when I heard that there was going to be no more major releases.  It felt like this was not a change going forward, but a change to stagnate, but that is because we all resist change – it’s natural.

We’ve seen how it’s sometimes not easy to run blue / green deployments with some of the releases that have come out lately.  Trying to use 9.2 tools on 9.0 apps is terrible; we really hope that things are going to be architected and released in a way that can be consumed continuously.

This is a shorter paragraph, as I think that the message is overall positive.

What they hope to deliver continuously:

These shots were taken at the partner session at collaborate, so they are completely immersed in a number of safe harbor statements.  

[images]

 

Wow, so that is a lot of enhancements that are going to be delivered as continuous items on the 9.2 code line.  All of the above is going to be bolted onto the current 9.2 applications release.  We are going to get tools releases, but the applications release will stay at 9.2.

Conclusion

If JD Edwards can continue to innovate strongly and continuously, all we need to do is ensure that we are ready to consume this at a rapid pace and allow our users to benefit from it.

The JD Edwards cadence of innovation has been exceptional, they are continuing to provide the tools to enable a business to make their digital transformation to a maturity level that is appropriate for the reason that they exist.  This is a deep statement, but the level of digitalization that an organisation can achieve is governed by what the company does and the reason that they exist. 

I look forward to continuing to architect systems for my clients that are future proof, that will embrace this announcement and allow the customers to actually benefit from it now and onwards to 2028…

More information about continuous delivery


It comes in a number of permutations and combinations – continuous innovation, continuous change, constant innovation… it all means one thing to JD Edwards: Continuous Delivery.

I’ve been trying to track down the official content relating to this announcement from Oracle, and I’ve come up trumps with the following MOS (https://support.oracle.com) article: Important Change to Oracle's Lifetime Support - Extending Premier Support for JD Edwards Latest Releases (Doc ID 2251064.1).

This contains 3 things that you need to read:

1.  An FAQ on the announcement.  You need to read this.  I’m not taking any credit for this – I’m just listing out what is in the announcement PDF.  I would urge you to go to the source documents too, as they might change.

Q: What are we announcing?

A: We are extending the Premier Support period for the latest releases of JD Edwards EnterpriseOne and World. We will review annually the support time line and extend Premier Support based on market conditions, customer activity, and release activity against that code line.

Q: What are the new support dates?

A: For JD Edwards EnterpriseOne 9.2: Premier Support is effective through October 2025 and Extended Support through October 2028. For JD Edwards World A94: Premier Support is effective through April 2022 and Extended Support through April 2025. Oracle Lifetime Support Policy for Oracle Applications

Q: Why are you making this change now?

A: In conversation with customers, we became aware of situations where they were delaying the decision to upgrade to the EnterpriseOne 9.2 release because it would result in a relatively short Premier Support window (The previous Premier Support end date for EnterpriseOne 9.2 was October 2020). In addition, customers are interested in a solution that provides a solid upgrade ROI; a release with a significantly longer Premier Support horizon than 2020 delivers that ROI. Oracle wants to reassure our JD Edwards customers that they can continue to run the current release of JD Edwards applications with ongoing support and enhancements through at least April 2025 for World Release A9.4 and through at least October 2028 for Release 9.2.

Q: What if I’m a customer who has already upgraded to 9.2?

A: Our discussions with existing EnterpriseOne 9.2 customers show that they have increased confidence because they now have an expanded support window for 9.2 and will be able to adopt new capabilities via easier-to-adopt updates.

Q: Does this mean you will no longer deliver enhancements for JD Edwards EnterpriseOne?

A: Absolutely not. We have a very active product roadmap and will continue delivering enhancements regularly, along with maintenance, legislative, and technology improvements. Given the changing market needs and consistent feedback from our customers that they need enhancements sooner and in easier-to-consume models, we will be delivering these as feature packs and updates on the EnterpriseOne 9.2 code line. We are referring to this approach as Continuous Delivery. This should be nothing new to our existing customers who are on the EnterpriseOne 9.2 release. We have delivered five releases (in the form of feature packs and/or updates) for the 9.2 code line since the general availability of EnterpriseOne 9.2 in October 2015. See JD Edwards Product Roadmap

Q: How about JD Edwards World?

A: JD Edwards World Release A9.4 will follow a similar model. Any new enhancements, legislative/regulatory updates, and technology improvements will be delivered on the A9.4 code line.

Q: Will you deliver another major release?

A: Yes, that is in our plan and roadmap. We are not communicating a specific year for that next major release at this time and will be focusing on delivering new release enhancements along with maintenance on the 9.2 code line. The move to a Continuous Delivery model is driven by the needs of our customers and the existing market conditions. We will continue to monitor a number of factors to make the best decision for our customers. For example, a very large functional or technology change that cannot be delivered effectively as an update or feature would lead us to consider a new code line split and a major release.

Q: How have customers responded to this change?

A: Very positively! This change has given them even more choice and control. They like the added flexibility this gives them in terms of when to adopt new (update) releases, the expanded support window, and a simpler approach to maintaining their JD Edwards environments. Customers also like not having to budget or plan for a major upgrade. They can choose and control when to add new functionality, and it is easier, less disruptive, and faster to implement and adopt.

Q: What are the key advantages for customers?

A: Continuous Delivery gives our customers a tool to better align IT and the line-of-business organizations they support by scheduling the adoption of updates based on how it best serves the business rather than on an end-of-support date.

Q: Will Oracle end support for EnterpriseOne in 2028 and for World in 2025?

A: No. We will evaluate the support dates annually and determine when it makes sense to extend the Premier and Extended Support time horizons. Other Oracle Application lines follow a similar model. To be clear, this is the longest support timeline published by any ERP vendor.

Q: How often do you plan to deliver new feature and function packs?

A: We plan to deliver new updates or feature packs two to three times per year for EnterpriseOne 9.2, and as needed for World A9.4.

Q: Do you still plan to deliver the next major release of JD Edwards EnterpriseOne (e.g., 9.3) in 2018 (approximately 3 years after the GA of 9.2)?

A: No. Because we have significantly enhanced our software delivery tooling and processes, we are no longer bound to delivering enhancements in major releases. We have already been delivering enhancements as updates to the existing EnterpriseOne 9.2 release: This approach allows our customers to take up enhancements when they meet specific business needs without the cost and disruption of a major upgrade. We plan to continue following the model of delivering updates on the 9.2 code line; our customers are also moving towards this model as a standard method for adopting new technology and features/functions. However, as stated above, we will continue to monitor the need to deliver a major release.

Q: Why is the support timeline for World A9.4 shorter than EnterpriseOne 9.2?

A: This decision was driven by the needs of our customers and the current market conditions. Based on discussions with our World customers, most of them are considering migration to the EnterpriseOne product suite with a possible mix of additional Oracle products. Given the large footprints of several of our World customers, they need a longer time window to plan and execute this transformation. We will continue to monitor this migration and will make revisions based upon the needs of our customers.

Q: Can customers simply upgrade to EnterpriseOne 9.2 and forget about it for the next 5 - 8 years because it will be covered by Premier Support until at least 2025?

A: As a best practice, we recommend that customers maintain their environment and stay current on the 9.2 code line by taking regular updates. Using this methodology will make software updates routine and predictable, if or when customers need a new enhancement to support their line of business or need a technology uplift, for example to support a new browser or database version. However, customers still have the choice and control over how frequently and when they get code-current based on their business needs and cycles. The Continuous Delivery model will require a shift in how customers maintain their JD Edwards environments, and we have a variety of purpose-fit tools that allow customers to evaluate and adopt these updates. See the Analyze Your Installation Before Upgrading section on the EnterpriseOne Upgrade Resources page on LearnJDE.

Q: What is Continuous Delivery?

A: Continuous Delivery (CD) is a software engineering approach in which teams produce software in short cycles, ensuring that the software can be reliably released at any time. It aims at building, testing, and releasing software faster and more frequently. The approach helps reduce the cost, time, and risk of delivering changes by allowing for more incremental updates to applications in production.

Q: Why is Continuous Delivery the right approach for JD Edwards customers?

A: Customer expectations have changed in terms of how they consume new versions of software. These expectations are based upon their experience with cloud-based applications and consumer devices such as smartphones. With Continuous Delivery, customers get timely JD Edwards product innovations to respond to their business needs, without the cost and potential disruption of a major upgrade. Customers no longer want to wait several years to get a new set of features. Our customers’ business world is changing so rapidly that they cannot afford to wait multiple years to receive updates to their enterprise software. These incremental updates are easier to consume, enabling customers to shorten time-to-value cycles.

2.  Product Road Map (pretty light, but you get the picture)… 

imageimage

3.  Oracle lifetime support policy for Oracle Applications (support matrix)

image

image

ERP analytics post go-live, what can we learn?


Recently a client of mine went live on 9.2.  Everything has gone swimmingly and I like to produce an infographic on the success of the project.  I’m able to farm various metrics to give a unique perspective on the go-live from a technical point of view.

anon-month-1_anon

I put a good quality version here, https://drive.google.com/open?id=0B30UFGvbR-EjNmhRc2hsMzB1UG8

Once again, it all looks good until we look at the page load time: this has increased on average from 0.8 seconds to 1.2 seconds, which seems to indicate that we have some work to do with our WebLogic setup.  It’s interesting to point out that this increase was not picked up by the user community.  This was not an issue that the project was looking at, but now they are.

ERP analytics has been able to show a real difference in interactive performance, which will now be addressed and quantified with further analysis of this data.

The following shows more detail that ERP analytics can give you: the majority of the additional time is in the server responding (3 times slower), while the network speed (page download time) is about the same.  The average page load time is higher, which means that the browser is taking the balance of the time to render the pages.  So – is JDE sending more complex pages to the browser in 9.2 as compared with 9.1?
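The decomposition above can be sketched as simple arithmetic: whatever is left of the average page load time after server response and network download is roughly the browser render time. The split below (0.45s server, 0.15s network) is purely illustrative, not from the report.

```python
# Rough decomposition of Google Analytics page-timing averages.
# The 1.2s total is the post-go-live average quoted above; the
# server/network figures below are hypothetical placeholders.

def decompose_page_load(total_s, server_response_s, download_s):
    """Anything left after server response and network download is
    (approximately) time spent by the browser rendering the page."""
    render_s = total_s - server_response_s - download_s
    return {
        "server": server_response_s,
        "network": download_s,
        "render": round(render_s, 3),
    }

print(decompose_page_load(1.2, 0.45, 0.15))
```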

clip_image001

image

For this particular issue, we are going to focus on the web servers initially and try and get back some of the time there.

Testing AIS, I mean really testing AIS


I’ve said it before and I’ll say it again: you must start using AIS for your integrations with JD Edwards.  It’s light and easy, and I want to show you how you can test that things are working – beyond the traditional defaultconfig.

Firstly, myriad-it and now fusion5 generously provide an up-to-date AIS server for you to poke around with:

https://myais.myriad-it.com:9090 – that is cool.  So if you use your browser and go to:

https://myais.myriad-it.com:9090/jderest/defaultconfig you’ll see something like:

image

So that is cool, but really, it tests nothing!

So let’s test a bit more, like logging into JDE

Remember, if you don’t have an account to the demo site – get one here: https://e92demo.myriad-it.com/ 

I’m going to do things locally now, check server manager for the rest end point:

image

Cool, let’s log in

I have a chrome extension called “Simple REST Client”

image

Now I can set a payload and run

URL:  http://e1ent-dnt.mits.local:9090/jderest/tokenrequest

{
    "deviceName":"MyDevice",
    "username":"JDE",
    "password":"xxxxx"
}
Content-Type: application/json

operation: post

reply – 200!

{"username":"JDE","environment":"JPY910","role":"*ALL","jasserver":"http://e1ent-dnt.mits.local:9081","userInfo":{"token":"044KpVeur4yjmrAm+i9TWBUAMeli2Vpe7yU3X5uz9MUtKc=MDIwMDA4LTI3MzNDA3OTY2ODQ4OTU0MDhNeURldmljZTE0OTQ0NjMzODgzMDQ=","langPref":"  ","locale":"en","dateFormat":"DMY","dateSeperator":"/","simpleDateFormat":"dd/MM/yy","decimalFormat":".","addressNumber":1001,"alphaName":"M Dynamax","appsRelease":"E910"},"userAuthorized":false,"version":null,"poStringJSON":null,"altPoStringJSON":null,"aisSessionCookie":"pr1mZTzc5Hn1v3r5z2j63SKlH2qr2xzdXdV0vHXjnQzsnlxPc6!-1457766052!1494463388319"}

image

Cool…  So now we can start to actually do something.

 

Now let’s do something decent, how many waiting jobs are in the system:

image

help about for the form details

Now, help about for the AIS control ID

Activate item help

image

 

image

Great, let’s see the column for status too.

image

Okay, we have this.. now let’s create a query

we call http://e1ent-dnt.mits.local:9191/jderest/formservice with the following

Note that you need to copy and paste the token from your tokenrequest into this payload.

{
    "token": "044yU/fNj6Fx1YGDjAteYZHlfGxfu3XEXeqtFxjpZoEtE=MDIwMDA4LTI0MjU0NzM0Mjk0NzAzNzQ1NTFNeURldmljZTE0OTQ0NjU0OTE1NjI=",
    "version": "ZJDE0001",
    "formActions": [
        {
 
            "command": "SetQBEValue",
            "value": "D",
            "controlID": "1[7]"
        },
        {
            "command": "SetControlValue",
            "value": "*",
            "controlID": "29"
        },
        {
            "command": "DoAction",
            "controlID": "23"
        }
    ],
    "deviceName": "MyDevice",
    "formName": "P986110B_W986110BA_ZJDE0001"
}

Wow, this gives you a JSON representation of the form data after the find has been pressed.

So really, you’ve just tested everything about your AIS installation.  You know that you can login and you know that the formservice is working.  Now you can hook up some mobile apps with confidence.  Oh, you also know how to identify controls on a form and set controls and QBE Values and also perform actions.  Nice!
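The two calls above (tokenrequest, then formservice) can be strung together from any client. Here is a minimal Python sketch; the host, credentials, version, and control IDs are the ones shown in this post, so substitute your own AIS server details. The payload builders are pure functions; the actual POSTs (shown in the comment, assuming the common `requests` library) are left to you.

```python
import json

# Sketch of the two AIS calls made above: tokenrequest, then formservice.
# BASE and the credentials are the demo values from this post - placeholders.
BASE = "http://e1ent-dnt.mits.local:9090/jderest"

def token_payload(device, username, password):
    """Body for POST /tokenrequest."""
    return {"deviceName": device, "username": username, "password": password}

def find_waiting_jobs_payload(token):
    """Body for POST /formservice against P986110B (Work With Servers).
    As in the post: set the status QBE column to 'D', set control 29
    to '*', then DoAction on control 23 (Find)."""
    return {
        "token": token,
        "version": "ZJDE0001",
        "formName": "P986110B_W986110BA_ZJDE0001",
        "deviceName": "MyDevice",
        "formActions": [
            {"command": "SetQBEValue", "value": "D", "controlID": "1[7]"},
            {"command": "SetControlValue", "value": "*", "controlID": "29"},
            {"command": "DoAction", "controlID": "23"},
        ],
    }

# With requests installed, the round trip would look something like:
#   token = requests.post(BASE + "/tokenrequest",
#                         json=token_payload("MyDevice", "JDE", "xxxxx")
#                         ).json()["userInfo"]["token"]
#   form = requests.post(BASE + "/formservice",
#                        json=find_waiting_jobs_payload(token)).json()

print(json.dumps(token_payload("MyDevice", "JDE", "xxxxx")))
```

Keeping the payloads as plain functions like this also makes them easy to reuse from whatever mobile app or bot you hook up next.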

ERP analytics is now self service


Gone are the days of exchanging tools releases for ERP analytics enablement; now you can self-service the entire process…  Billing too!

We’ve been working on a completely digital process, where subscribers can implement ERP analytics themselves.

You can start here: https://s3-ap-southeast-2.amazonaws.com/erpanalytics/index.html and follow the bouncing ball.

Once you enter your subscription details, the process will allow you to upload your tools and will automatically patch it.

Also, after you are subscribed, you can patch any tools release, any time.

So, if you want to know who’s doing what

image

Slowest users

image

Slowest apps

image

Most time spent on page

image

 

And if you want this carved up by browser, city, state, day, or date – you can.

Take a look at what is possible: https://s3-ap-southeast-2.amazonaws.com/erpanalytics/documentation.html

It’s also important to remember, that if you are a partner and see value in this information for your clients (which you should), then you can join our growing channel program.  We are providing these insights to many clients around the world.  You get access to all of our self service patching routines and detailed reports.

Rapid adoption of continuous delivery


We are in the throes of putting together a framework for assisting our clients to adopt continuous delivery – we are facilitating this with a number of core offerings.

First and foremost (as you are aware) ERP analytics allows us to identify what is being used, and at the end of the day, retire technical debt.  We can tell you what is being used and by whom.  So armed with this knowledge you can fine tune your retrofit – and you will need to do this!

Secondly, we have some software that actually shows the controls that have changed between environments.  It can run over any number of environments and tell you exactly what is different on a form-by-form and control-by-control basis.  Therefore, if a row exit is missing, or a field has changed its name – we can quickly tell you.  This is going to assist in lowering the amount of testing that you have to do and improve the maturity of what you are releasing to the business for testing.  This is like an advanced impact analysis tool.

Thirdly we implement a blue/green deployment model, as you need to become more agile.  I hear you saying, we are “old school ERP”, we are waterfall…  We don’t make mistakes because we spend months in regression testing. You cannot do this anymore!  You need to be more efficient with your releases and with your testing.  It’s critical to be agile with mod deployment and do an element of “production testing”.  This can be controlled easily and the benefits are huge!

These three simple initiatives combined with project management which is modification centric (based from ERP analytics) allows you to define your “continuous delivery” project.  The process can be summarised as below.

image

I recommend implementing a 3 month / 4 month cycle – planned for the entire year with all of your release dates with the following high level steps:

Search for ESU’s monthly – change assistant

•Apply them to DV920 no matter what

•Impact analysis based upon ERP analytics

•Change documentation automatically generated

Apply to PY920

•Automatically request retrofit – create projects, add objects, workflow development!

•Retrofit needs to be very modular

•Perhaps look at retrofit more carefully (or completely bespoke)

Testing

•Automated regression testing

•Ensure mods are tested, check with ERP analytics

Release to production

•If problems, regress and fix

•Regression can be a package deployment away!

 

It really is important that we start to treat our ERP a little differently.  It’s critical and it’s generally a single source of truth, but we must continue to deliver improvements to our businesses with agility.  Oracle are giving us the ability to do this; we need to embrace it.  I like to think of delivering to clients as if it were SaaS.

Worried about Oracle license usage or an audit in JDE? LMS


Your JD Edwards licenses are generally limited by named user.  So when you look at your agreements, how can you be confident that you only have, say, 25 users using Accounts Payable?

This is easy when using ERP analytics.  We can configure a report to tell you the number of unique usernames that have used certain system codes in the last X days / weeks / months.

I’m going to create a new report for this purpose

image

image

There is a lot going on above, but let me explain the bigger pieces.

  • This is a new report for any date range, the user specifies this
  • I’m choosing to report on pageviews, as I do not really care about performance or engagement for this report
  • I’m choosing drilldown dimensions of Environment (as I only want to look at prod)
  • My secondary dimension is app name, note that I’m using the application analysis view, which contains app name
  • I’ve then applied a regex filter to ensure that I only get apps that contain ^P04 (starting with P04) https://support.google.com/analytics/answer/1034324?hl=en
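The regex filter in the last bullet is doing nothing more than an anchored match on the application name. A quick way to sanity-check the pattern before putting it in the report (the app names below are made up for illustration):

```python
import re

# Same filter the GA report applies: keep only application names
# matching ^P04 (system code 04, Accounts Payable). Sample names
# are hypothetical.
apps = ["P04012", "P0411", "P4210", "P0101", "P04571"]
ap_apps = [a for a in apps if re.search(r"^P04", a)]
print(ap_apps)  # P4210 and P0101 are filtered out
```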

image

This is cool, I see that my filter is applied from the beginning, so this is ONLY the data that has been used from system code “04” for my current date range.  I can then drill down to my production environment:

image

Okay, so I can see the programs and how often they are loaded, this is nice.

But I want to see unique users.

image

I create a new tab, I like to report using this method – it’s reusable.  So I create my users tab and select the “user” dimension for drilldown.

Save and run

image

Select my environment again

image

So I can quickly see my applications and unique list of users that have used them in the last week, 62

Let’s see for the last 2 months

image

We change the date range

image

Great, we can see that there are 135 unique AP users over the last 3 months.  We know the busiest users and we know the busiest applications.  We can do this for any system code that we like.

This information is VERY easy to acquire and is the only true way of knowing the unique user counts of your JD Edwards applications.


Outlook, I don’t have contacts, I have type ahead!


Wow, I find that I treat type ahead like my personal address book.  Recently we changed domain (myriad-it.com to fusion5.com) and found that my type ahead seriously did not know what was going on.

The information is stored in a binary file here for 2010 and above, thanks https://www.slipstick.com/outlook/email/understanding-outlooks-autocomplete-cache-nk2/ 

C:\Users\username\AppData\Local\Microsoft\Outlook\RoamCache

Some more googling found this:  https://support.microsoft.com/en-us/help/2199226

Download this cool app to actually turn the binary file into something that is readable:

https://github.com/stephenegriffin/mfcmapi/releases/tag/17.0.17099.01

image

Run it

image

Session-Logon

image

Double click your mailbox

image

Right click inbox under the IPM_SUBTREE

image

Locate auto complete in the subject, choose export message

image

image

Name it

image

Then open a command line:

Pipe the XML file through findstr and you’ll get a list – job done.  All of the email addresses in type ahead.  Now delete your cache.

or  type TypeAhead.xml | findstr /c:"szEmail"> emails.txt
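If you’d rather not use findstr, the same extraction is easy in Python. Note the exact XML shape MFCMAPI exports can vary, so the regex may need tweaking; the sample snippet below is made up purely to show the idea.

```python
import re

# Pull the addresses out of the exported autocomplete XML - the
# Python equivalent of the findstr one-liner above. In real use,
# read the text from the file exported by MFCMAPI.

def extract_emails(xml_text):
    """Return the values of every szEmail property in the export."""
    return re.findall(r"szEmail[^>]*>([^<]+)<", xml_text)

sample = ('<property name="szEmail">alice@example.com</property>'
          '<property name="szEmail">bob@example.com</property>')
print(extract_emails(sample))
```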

To delete the cache in modern outlook

file–>options

image

Then choose Mail

image

Empty type ahead!

Let’s expand on AIS and JDE–api.ai; meet Eddie the JDE bot!


Team innovation at fusion5 have been working overtime to come up with a very cool (cool is a subjective term, as we all know) way of communicating with JD Edwards.  Let’s use a Slack bot that is hooked into JDE via AIS and api.ai.

I know that this seems complicated, but there is not a lot of code to it – we’ve done all of this (this week)…  You can see my conversation below:

image

Our slack bot authenticates to JDE using oauth or SAML3 – so this is a nice easy way of making sure that things are secure. 

Okay, so this is really cool.  In our particular example we are able to approve, reject and list all of our purchase orders in JDE.  This is for the user that is currently logged in.  We can do this in natural language and have a conversation with the bot, like here:

API.AI Bot APP [4:13 PM]
Sup

shannonm [4:14 PM]
any PO's today?


API.AI Bot APP [4:14 PM]
You have 3 purchase orders awaiting approval.

shannonm [4:14 PM]
cool


API.AI Bot APP [4:14 PM]
Rad!

shannonm [4:14 PM]
show me the first PO


API.AI Bot APP [4:14 PM]
The 1st is for 3003.74 USD, added by SYED ALTHAF GAFFAR. It is for order 658, and requested on Thursday, 2nd of February.

shannonm [4:14 PM]
wow


API.AI Bot APP [4:14 PM]
You are easily impressed

shannonm [4:14 PM]
approve


API.AI Bot APP [4:14 PM]
Please enter your reason for approving this Purchase Order

shannonm [4:14 PM]
I think that seems like a good use of company funds


API.AI Bot APP [4:14 PM]
(Mock) The purchase order has been approved using I think that seems like a good use of company funds

shannonm [4:20 PM]
Any po s for approval


API.AI Bot APP [4:20 PM]
You have 3 purchase orders awaiting approval.

shannonm [4:21 PM]
About purchase order number one


API.AI Bot APP [4:21 PM]
The 1st is for 3003.74 USD, added by SYED ALTHAF GAFFAR. It is for order 658, and requested on Thursday, 2nd of February.

Great, but it just gets better and better.  The power of Slack means that I can also just run this on my phone:

Screenshot_2017-05-18-16-21-33

So wait, I’ve just created a mobile application on my phone that will approve my purchase orders this simply?  Yes.  You can also use voice for this, as every device has a built-in voice keyboard.

Of course it’s just as easy to use Microsoft Teams for a more professional conversation:

image

Wow, so you can have a conversation with your mobile phone (driving to work), and do all of your PO approvals.

We can even ask the bot (by voice) to send us the PO attachments as an email, as you might want to verify them – no problems – this is what the bot will do!

We’ve also got this bot working with google home – a virtual assistant.

googlehome

It just talks to you about anything JDE that your heart desires.  There is an easy extension to any customer-service type application, a website bot for example.  Your customers could log in and ask about the progress of an order; the bot could ask a few questions and tell them exactly what is going on.  This is a very simple way of giving more to your clients with less.

There are some really cool applications for voice control and JD Edwards.  Think about all of the dirty hands data entry scenarios…  Meter readings for example.  We also have a bot that will tell you meter readings for an asset and allow you to update them with voice.  This is an amazingly pragmatic use case for voice and JDE, as when you are wearing gloves – you don’t want to enter data on a mobile keyboard.  WOW.

Imagine the logical extension, as we’re also working with estimote beacons. 

Image result for estimote

Imagine that your  fleet had beacons…  Imagine that your mobile JD Edwards application listened for beacons because you were running your “fixed assets” mobile application.  Therefore your mobile device knows the asset that you are working on.  You could ask it “what maintenance needs to be done today?”, perform meter readings, record time worked – everything done because of your proximity to a beacon.  This makes the data entry process much more efficient and less prone to error.

You have a beacon that runs for 5 years on its own battery.  It can tell you light, barometric conditions, temperature and more…  This beacon transmits this information to your mobile device.  Your mobile device then transmits this data “IoT” style to your data warehouse in the cloud for analytics and insights.

A little help with interpreting oracle performance and JDE batch jobs (UBEs)


Example 1: Logic bound

If you see something like this, it’s not necessarily bad.

image

You’ll notice that the duration is 20m, but the database time is 8.8 seconds.  This is NOT causing the database grief, and it is probably just in a loop in the UBE.

image

This means that the UBE is in the main loop and might be crunching some logic or perhaps some smaller sub queries based upon the main query.

Example 2: Database busy

But, if we look at another statement, we can see that the database time and the execution time are about the same.  So this one is hurting the DB, and I bet that the UBE is doing nothing but waiting.  We can also see that the blue is IO and the green is CPU, so this is doing a lot of disk I/O.
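The heuristic used across these examples can be codified crudely: compare database time with elapsed duration. The 50% threshold below is arbitrary, just a starting point for triage.

```python
# Crude classification of a UBE from Enterprise Manager figures:
# if the database accounts for little of the elapsed time, the job
# is logic-bound (crunching in the kernel); if it accounts for most
# of it, the job is waiting on the database.

def classify_ube(duration_s, db_time_s, db_fraction=0.5):
    if duration_s <= 0:
        return "unknown"
    return ("database-bound"
            if db_time_s / duration_s >= db_fraction
            else "logic-bound")

print(classify_ube(20 * 60, 8.8))  # Example 1: logic-bound
print(classify_ube(300, 290))      # Example 2 style: database-bound
```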

image

We can see that this is a runbatch process on bear, and we need to drill down to get the PID

Drilling down on the user session gives us:

image

Great, so now I know the OS process ID, and will look into this on bear – but it’s not at the top of top.  Remember our theory that it’ll be waiting.

image

ps –ef |grep 15928

When we find it, we can see that it’s hiding…

jde900   15928  9313  0 16:05 pts/1    00:00:00 runbatch JDE pgAAAAQDAgEBAAAAvAIAAAAAAAAsAAAABABTaGRyAk4Acwg4AC4AMQAwABSBWpC+uTe3+KbaUoYl6eW/UPqPT2YAAAAFAFNkYXRhWnicHYpBDkAwFEQfGqew6AVIFWFLiESErV0P4XoOZ9qfzJuXzH8BU+RZpv5y0pUHKxvGYqkCOyc3C7P6ksUtKA+9x9EyUqsH0dMn72jkQ6IT4xo/J+38q3ML4A== PY900 *ALL 12481590 /u01/jdedwards/e900/PrintQueue/R56120_F5004_12481590_PDF

Interestingly this process is doing nothing, so it is waiting on the SQL

Some really cool information you can see when drilling down into the session is:

image

That’s all the SQL that it’s run from inception, as a batch process is a new process.  Note that this does not have any of the system or security statements, as they go to a different instance.

This is a really nice way of being able to track back what the UBE / SQL is doing and who is waiting on what.

 

Example 3: Logic bound

Once again, this is a job that has a long duration, but only 21 seconds of database time…  So really, I’m not worried about it.  It does give me a good chance to explain a number of aspects about it.

image

As I suspected, this is doing a lot of work at the UBE server end.

image

We can use the process ID field in WSJ to find the batch job we are dealing with… easy…

image

Now we can look at execution detail to see the main select loop.

image

We see that this has not executed a single loop (not too sure about that)

image

Wow, Enterprise Manager tells us the same SQL is running, that is handy.

image

We can also see when we look at the detail of the session, that it’s also running some updates and selects on other tables within the main loop.  This is what we do not see in JDE execution detail.  We can also see that this is the update that seems to be taking the time.

image

We can see that this is probably a loop over the F0150 and then processing updates to the F03B11.

You can find this under session details –> activity tab

image

Despite taking the lion’s share of the inner activity, it’s not doing that much in the scheme of things:

image

So I am not worried about this one either.

while I’m monitoring performance, this is a bug I think…


Don’t worry about this one either…

This is either the standard JDE interface, or going to WSJ:

image

or

image

You see that the session has not been “hung up” properly from the client.

The duration is 15 minutes, but only 0.4s on the database

image

 

image

You can also see from above that the database is waiting for a message from the client…  Interestingly, the object is the user overrides index, so this might need to be looked at in more detail.  Logging out of JDE still leaves this hanging.

Again, nothing to worry about – but it’s a shame that it looks like it’s causing problems when it is not.

image

If you come across this situation, clicking on the SQL ID will not help, click on the user and look into that information.

Easy to ignore when you know what is going on.

oracle OLTP compression, F42199 and the missing compression


It’s like a murder mystery…

I’ve been doing some OLTP compression for a client, because they have too much data – and JDE loves whitespace.

So I whip up some great statements.

create table TESTDTA.F42199SRM as select * from TESTDTA.F42199 where 1=0;
alter table TESTDTA.F42199SRM compress for OLTP;
alter table TESTDTA.F42199SRM move tablespace TESTDTAT;
alter table TESTDTA.f42199SRM NOLOGGING;
insert into TESTDTA.F42199SRM select * from TESTDTA.F42199;

All good, let’s check the size of the tables – still 320 GB…

image

How’s this for a table that works (F0911)

image

F0911 went from 381GB to 21GB…  WHAT!!!!  I’ll take a CPU hit any day of the week to be able to load the entire F0911 in 21GB!  The table above shows ORADTA uncompressed and CRPDTA and TESTDTA compressed.

Hmm, so it takes an hour to run… I get no errors…  But my table is still 231GB…  Something is wrong.  Then I remember reading something about compression not working if the table has more than 256 columns, and I see that F42199 has 266…  Doh!!!
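It’s worth checking for this up front rather than after an hour-long insert. In practice you would pull the column counts from ALL_TAB_COLUMNS (select table_name, count(*) … group by table_name); the sketch below just applies the limit to a dict of counts. The F42199 count (266) is from this post; the F0911 count is illustrative.

```python
# Flag tables that exceed the column limit for Oracle table
# compression (around 255/256 columns, depending on version -
# check your release's documentation). Feed this from
# ALL_TAB_COLUMNS in real use; counts below are examples.

COMPRESSION_COLUMN_LIMIT = 255

def uncompressible(tables):
    """tables: dict of table_name -> column count."""
    return [name for name, cols in tables.items()
            if cols > COMPRESSION_COLUMN_LIMIT]

print(uncompressible({"F0911": 160, "F42199": 266}))
```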

What am I going to do?

It seems simple at first: create an updateable view.  Nice…  It works perfectly – except when trying to insert the data.

I create a couple of smaller tables and then define the view to be a select over the top of them.  Then insert all of the data and I’m done!

CREATE TABLE "TESTDTA"."F42199_T1"
   (    "SLKCOO" NCHAR(5) NOT NULL ,
    "SLDOCO" NUMBER NOT NULL ,
    "SLDCTO" NCHAR(2) NOT NULL ,
    "SLLNID" NUMBER NOT NULL ,
    "SLSFXO" NCHAR(3),
    "SLMCU" NCHAR(12),
    "SLCO" NCHAR(5),
    "SLOKCO" NCHAR(5),
    "SLOORN" NCHAR(8),
    "SLOCTO" NCHAR(2),
    "SLOGNO" NUMBER,
    "SLRKCO" NCHAR(5),
    "SLRORN" NCHAR(8),
    "SLRCTO" NCHAR(2),
    "SLRLLN" NUMBER,
    "SLDMCT" NCHAR(12),
    "SLDMCS" NUMBER,
    "SLAN8" NUMBER,
    "SLSHAN" NUMBER,
    "SLPA8" NUMBER,
    "SLDRQJ" NUMBER(6,0),
    "SLTRDJ" NUMBER(6,0),
    "SLPDDJ" NUMBER(6,0),
    "SLADDJ" NUMBER(6,0),
    "SLIVD" NUMBER(6,0),
    "SLCNDJ" NUMBER(6,0),
    "SLDGL" NUMBER(6,0),
    "SLRSDJ" NUMBER(6,0),
    "SLPEFJ" NUMBER(6,0),
    "SLPPDJ" NUMBER(6,0),
    "SLVR01" NCHAR(25),
    "SLVR02" NCHAR(25),
    "SLITM" NUMBER,
    "SLLITM" NCHAR(25),
    "SLAITM" NCHAR(25),
    "SLLOCN" NCHAR(20),
    "SLLOTN" NCHAR(30),
    "SLFRGD" NCHAR(3),
    "SLTHGD" NCHAR(3),
    "SLFRMP" NUMBER,
    "SLTHRP" NUMBER,
    "SLEXDP" NUMBER,
    "SLDSC1" NCHAR(30),
    "SLDSC2" NCHAR(30),
    "SLLNTY" NCHAR(2),
    "SLNXTR" NCHAR(3),
    "SLLTTR" NCHAR(3),
    "SLEMCU" NCHAR(12),
    "SLRLIT" NCHAR(8),
    "SLKTLN" NUMBER,
    "SLCPNT" NUMBER,
    "SLRKIT" NUMBER,
    "SLKTP" NUMBER,
    "SLSRP1" NCHAR(3),
    "SLSRP2" NCHAR(3),
    "SLSRP3" NCHAR(3),
    "SLSRP4" NCHAR(3),
    "SLSRP5" NCHAR(3),
    "SLPRP1" NCHAR(3),
    "SLPRP2" NCHAR(3),
    "SLPRP3" NCHAR(3),
    "SLPRP4" NCHAR(3),
    "SLPRP5" NCHAR(3),
    "SLUOM" NCHAR(2),
    "SLUORG" NUMBER,
    "SLSOQS" NUMBER,
    "SLSOBK" NUMBER,
    "SLSOCN" NUMBER,
    "SLSONE" NUMBER,
    "SLUOPN" NUMBER,
    "SLQTYT" NUMBER,
    "SLQRLV" NUMBER,
    "SLCOMM" NCHAR(1),
    "SLOTQY" NCHAR(1),
    "SLUPRC" NUMBER,
    "SLAEXP" NUMBER,
    "SLAOPN" NUMBER,
    "SLPROV" NCHAR(1),
    "SLTPC" NCHAR(1),
    "SLAPUM" NCHAR(2),
    "SLLPRC" NUMBER,
    "SLUNCS" NUMBER,
    "SLECST" NUMBER,
    "SLCSTO" NCHAR(1),
    "SLTCST" NUMBER,
    "SLINMG" NCHAR(10),
    "SLPTC" NCHAR(3),
    "SLRYIN" NCHAR(1),
    "SLDTBS" NCHAR(1),
    "SLTRDC" NUMBER,
    "SLFUN2" NUMBER,
    "SLASN" NCHAR(8),
    "SLPRGR" NCHAR(8),
    "SLCLVL" NCHAR(3),
    "SLCADC" NUMBER,
    "SLKCO" NCHAR(5),
    "SLDOC" NUMBER,
    "SLDCT" NCHAR(2),
    "SLODOC" NUMBER,
    "SLODCT" NCHAR(2),
    "SLOKC" NCHAR(5),
    "SLPSN" NUMBER,
    "SLDELN" NUMBER,
    "SLTAX1" NCHAR(1),
    "SLTXA1" NCHAR(10),
    "SLEXR1" NCHAR(2),
    "SLATXT" NCHAR(1),
    "SLPRIO" NCHAR(1),
    "SLRESL" NCHAR(1),
    "SLBACK" NCHAR(1),
    "SLSBAL" NCHAR(1),
    "SLAPTS" NCHAR(1),
    "SLLOB" NCHAR(3),
    "SLEUSE" NCHAR(3),
    "SLDTYS" NCHAR(2),
    "SLNTR" NCHAR(2),
    "SLVEND" NUMBER,
    "SLCARS" NUMBER,
    "SLMOT" NCHAR(3),
    "SLROUT" NCHAR(3),
    "SLSTOP" NCHAR(3),
    "SLZON" NCHAR(3),
    "SLCNID" NCHAR(20),
    "SLFRTH" NCHAR(3),
    "SLSHCM" NCHAR(3),
    "SLSHCN" NCHAR(3),
    "SLSERN" NCHAR(30),
    "SLUOM1" NCHAR(2),
    "SLPQOR" NUMBER,
    "SLUOM2" NCHAR(2),
    "SLSQOR" NUMBER,
    "SLUOM4" NCHAR(2),
    "SLITWT" NUMBER,
    "SLWTUM" NCHAR(2),
    "SLITVL" NUMBER,
    "SLVLUM" NCHAR(2),
    "SLRPRC" NCHAR(8),
    "SLORPR" NCHAR(8),
    "SLORP" NCHAR(1),
    "SLCMGP" NCHAR(2),
    "SLGLC" NCHAR(4),
    "SLCTRY" NUMBER,
    "SLFY" NUMBER,
    "SLSO01" NCHAR(1),
    "SLSO02" NCHAR(1),
    "SLSO03" NCHAR(1),
    "SLSO04" NCHAR(1),
    "SLSO05" NCHAR(1),
    "SLSO06" NCHAR(1),
    "SLSO07" NCHAR(1),
    "SLSO08" NCHAR(1),
    "SLSO09" NCHAR(1),
    "SLSO10" NCHAR(1),
    "SLSO11" NCHAR(1),
    "SLSO12" NCHAR(1),
    "SLSO13" NCHAR(1),
    "SLSO14" NCHAR(1),
    "SLSO15" NCHAR(1),
    "SLACOM" NCHAR(1),
    "SLCMCG" NCHAR(8),
    "SLRCD" NCHAR(3),
    "SLGRWT" NUMBER,
    "SLGWUM" NCHAR(2),
    "SLSBL" NCHAR(8),
    "SLSBLT" NCHAR(1),
    "SLLCOD" NCHAR(2),
    "SLUPC1" NCHAR(2),
    "SLUPC2" NCHAR(2),
    "SLUPC3" NCHAR(2),
    "SLSWMS" NCHAR(1),
    "SLUNCD" NCHAR(1),
    "SLCRMD" NCHAR(1),
    "SLCRCD" NCHAR(3),
    "SLCRR" NUMBER,
    "SLFPRC" NUMBER,
    "SLFUP" NUMBER,
    "SLFEA" NUMBER,
    "SLFUC" NUMBER,
    "SLFEC" NUMBER,
    "SLURCD" NCHAR(2),
    "SLURDT" NUMBER(6,0),
    "SLURAT" NUMBER,
    "SLURAB" NUMBER,
    "SLURRF" NCHAR(15),
    "SLTORG" NCHAR(10),
    "SLUSER" NCHAR(10),
    "SLPID" NCHAR(10),
    "SLJOBN" NCHAR(10),
    "SLUPMJ" NUMBER(6,0) NOT NULL ENABLE,
    "SLTDAY" NUMBER NOT NULL ENABLE,
    "SLSO16" NCHAR(1),
    "SLSO17" NCHAR(1),
    "SLSO18" NCHAR(1),
    "SLSO19" NCHAR(1),
    "SLSO20" NCHAR(1),
    "SLIR01" NCHAR(30),
    "SLIR02" NCHAR(30),
    "SLIR03" NCHAR(30),
    "SLIR04" NCHAR(30),
    "SLIR05" NCHAR(30),
    "SLSOOR" NUMBER(15,0),
    "SLVR03" NCHAR(25),
    "SLDEID" NUMBER,
    "SLPSIG" NCHAR(30),
    "SLRLNU" NCHAR(10),
    "SLPMDT" NUMBER,
    "SLRLTM" NUMBER,
    "SLRLDJ" NUMBER(6,0),
    "SLDRQT" NUMBER,
    "SLADTM" NUMBER,
    "SLOPTT" NUMBER,
    "SLPDTT" NUMBER,
    "SLPSTM" NUMBER,
    "SLXDCK" NCHAR(1)
     ) SEGMENT CREATION IMMEDIATE
  PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
COMPRESS FOR OLTP NOLOGGING
  STORAGE(INITIAL 4294967296 NEXT 209715200 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
  BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
  TABLESPACE "SSDMAX"
  PARALLEL ;


  CREATE TABLE "TESTDTA"."F42199_T2"
   (    "SLKCOO" NCHAR(5) NOT NULL ,
    "SLDOCO" NUMBER NOT NULL ,
    "SLDCTO" NCHAR(2) NOT NULL ,
    "SLLNID" NUMBER NOT NULL ,
      "SLUPMJ" NUMBER(6,0) NOT NULL ENABLE,
    "SLTDAY" NUMBER NOT NULL ENABLE,
      "SLXPTY" NUMBER,
    "SLDUAL" NCHAR(1),
    "SLBSC" NCHAR(10),
    "SLCBSC" NCHAR(10),
    "SLCORD" NUMBER,
    "SLDVAN" NUMBER,
    "SLPEND" NCHAR(1),
    "SLRFRV" NCHAR(3),
    "SLMCLN" NUMBER,
    "SLSHPN" NUMBER,
    "SLRSDT" NUMBER,
    "SLPRJM" NUMBER,
    "SLOSEQ" NUMBER,
    "SLMERL" NCHAR(3),
    "SLHOLD" NCHAR(2),
    "SLHDBU" NCHAR(12),
    "SLDMBU" NCHAR(12),
    "SLBCRC" NCHAR(3),
    "SLODLN" NUMBER,
    "SLOPDJ" NUMBER(6,0),
    "SLXKCO" NCHAR(5),
    "SLXORN" NUMBER,
    "SLXCTO" NCHAR(2),
    "SLXLLN" NUMBER,
    "SLXSFX" NCHAR(3),
    "SLPOE" NCHAR(6),
    "SLPMTO" NCHAR(1),
    "SLANBY" NUMBER,
    "SLPMTN" NCHAR(12),
    "SLNUMB" NUMBER,
    "SLAAID" NUMBER,
    "SLPRAN8" NUMBER,
    "SLSPATTN" NCHAR(50),
    "SLPRCIDLN" NUMBER,
    "SLCCIDLN" NUMBER,
    "SLSHCCIDLN" NUMBER,
    "SLOPPID" NUMBER,
    "SLOSTP" NCHAR(3),
    "SLUKID" NUMBER,
    "SLCATNM" NCHAR(30),
    "SLALLOC" NCHAR(1),
    "SLFULPID" NUMBER(15,0),
    "SLALLSTS" NCHAR(30),
    "SLOSCORE" NUMBER,
    "SLOSCOREO" NCHAR(1),
    "SLCMCO" NCHAR(5),
    "SLKITID" NUMBER,
    "SLKITAMTDOM" NUMBER,
    "SLKITAMTFOR" NUMBER,
    "SLKITDIRTY" NCHAR(1),
    "SLOCITT" NCHAR(1),
    "SLOCCARDNO" NUMBER
       ) SEGMENT CREATION IMMEDIATE
  PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
COMPRESS FOR OLTP NOLOGGING
  STORAGE(INITIAL 4294967296 NEXT 209715200 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
  BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
  TABLESPACE "SSDMAX"
  PARALLEL ;
 


  CREATE VIEW TESTDTA.F42199_T3 as
  select t1.SLKCOO ,
t1.SLDOCO ,
t1.SLDCTO ,
t1.SLLNID ,
t1.SLSFXO ,
t1.SLMCU ,
t1.SLCO ,
t1.SLOKCO ,
t1.SLOORN ,
t1.SLOCTO ,
t1.SLOGNO ,
t1.SLRKCO ,
t1.SLRORN ,
t1.SLRCTO ,
t1.SLRLLN ,
t1.SLDMCT ,
t1.SLDMCS ,
t1.SLAN8 ,
t1.SLSHAN ,
t1.SLPA8 ,
t1.SLDRQJ ,
t1.SLTRDJ ,
t1.SLPDDJ ,
t1.SLADDJ ,
t1.SLIVD ,
t1.SLCNDJ ,
t1.SLDGL ,
t1.SLRSDJ ,
t1.SLPEFJ ,
t1.SLPPDJ ,
t1.SLVR01 ,
t1.SLVR02 ,
t1.SLITM ,
t1.SLLITM ,
t1.SLAITM ,
t1.SLLOCN ,
t1.SLLOTN ,
t1.SLFRGD ,
t1.SLTHGD ,
t1.SLFRMP ,
t1.SLTHRP ,
t1.SLEXDP ,
t1.SLDSC1 ,
t1.SLDSC2 ,
t1.SLLNTY ,
t1.SLNXTR ,
t1.SLLTTR ,
t1.SLEMCU ,
t1.SLRLIT ,
t1.SLKTLN ,
t1.SLCPNT ,
t1.SLRKIT ,
t1.SLKTP ,
t1.SLSRP1 ,
t1.SLSRP2 ,
t1.SLSRP3 ,
t1.SLSRP4 ,
t1.SLSRP5 ,
t1.SLPRP1 ,
t1.SLPRP2 ,
t1.SLPRP3 ,
t1.SLPRP4 ,
t1.SLPRP5 ,
t1.SLUOM ,
t1.SLUORG ,
t1.SLSOQS ,
t1.SLSOBK ,
t1.SLSOCN ,
t1.SLSONE ,
t1.SLUOPN ,
t1.SLQTYT ,
t1.SLQRLV ,
t1.SLCOMM ,
t1.SLOTQY ,
t1.SLUPRC ,
t1.SLAEXP ,
t1.SLAOPN ,
t1.SLPROV ,
t1.SLTPC ,
t1.SLAPUM ,
t1.SLLPRC ,
t1.SLUNCS ,
t1.SLECST ,
t1.SLCSTO ,
t1.SLTCST ,
t1.SLINMG ,
t1.SLPTC ,
t1.SLRYIN ,
t1.SLDTBS ,
t1.SLTRDC ,
t1.SLFUN2 ,
t1.SLASN ,
t1.SLPRGR ,
t1.SLCLVL ,
t1.SLCADC ,
t1.SLKCO ,
t1.SLDOC ,
t1.SLDCT ,
t1.SLODOC ,
t1.SLODCT ,
t1.SLOKC ,
t1.SLPSN ,
t1.SLDELN ,
t1.SLTAX1 ,
t1.SLTXA1 ,
t1.SLEXR1 ,
t1.SLATXT ,
t1.SLPRIO ,
t1.SLRESL ,
t1.SLBACK ,
t1.SLSBAL ,
t1.SLAPTS ,
t1.SLLOB ,
t1.SLEUSE ,
t1.SLDTYS ,
t1.SLNTR ,
t1.SLVEND ,
t1.SLCARS ,
t1.SLMOT ,
t1.SLROUT ,
t1.SLSTOP ,
t1.SLZON ,
t1.SLCNID ,
t1.SLFRTH ,
t1.SLSHCM ,
t1.SLSHCN ,
t1.SLSERN ,
t1.SLUOM1 ,
t1.SLPQOR ,
t1.SLUOM2 ,
t1.SLSQOR ,
t1.SLUOM4 ,
t1.SLITWT ,
t1.SLWTUM ,
t1.SLITVL ,
t1.SLVLUM ,
t1.SLRPRC ,
t1.SLORPR ,
t1.SLORP ,
t1.SLCMGP ,
t1.SLGLC ,
t1.SLCTRY ,
t1.SLFY ,
t1.SLSO01 ,
t1.SLSO02 ,
t1.SLSO03 ,
t1.SLSO04 ,
t1.SLSO05 ,
t1.SLSO06 ,
t1.SLSO07 ,
t1.SLSO08 ,
t1.SLSO09 ,
t1.SLSO10 ,
t1.SLSO11 ,
t1.SLSO12 ,
t1.SLSO13 ,
t1.SLSO14 ,
t1.SLSO15 ,
t1.SLACOM ,
t1.SLCMCG ,
t1.SLRCD ,
t1.SLGRWT ,
t1.SLGWUM ,
t1.SLSBL ,
t1.SLSBLT ,
t1.SLLCOD ,
t1.SLUPC1 ,
t1.SLUPC2 ,
t1.SLUPC3 ,
t1.SLSWMS ,
t1.SLUNCD ,
t1.SLCRMD ,
t1.SLCRCD ,
t1.SLCRR ,
t1.SLFPRC ,
t1.SLFUP ,
t1.SLFEA ,
t1.SLFUC ,
t1.SLFEC ,
t1.SLURCD ,
t1.SLURDT ,
t1.SLURAT ,
t1.SLURAB ,
t1.SLURRF ,
t1.SLTORG ,
t1.SLUSER ,
t1.SLPID ,
t1.SLJOBN ,
t1.SLUPMJ ,
t1.SLTDAY ,
t1.SLSO16 ,
t1.SLSO17 ,
t1.SLSO18 ,
t1.SLSO19 ,
t1.SLSO20 ,
t1.SLIR01 ,
t1.SLIR02 ,
t1.SLIR03 ,
t1.SLIR04 ,
t1.SLIR05 ,
t1.SLSOOR ,
t1.SLVR03 ,
t1.SLDEID ,
t1.SLPSIG ,
t1.SLRLNU ,
t1.SLPMDT ,
t1.SLRLTM ,
t1.SLRLDJ ,
t1.SLDRQT ,
t1.SLADTM ,
t1.SLOPTT ,
t1.SLPDTT ,
t1.SLPSTM ,
t1.SLXDCK ,
t2.SLXPTY ,
t2.SLDUAL ,
t2.SLBSC ,
t2.SLCBSC ,
t2.SLCORD ,
t2.SLDVAN ,
t2.SLPEND ,
t2.SLRFRV ,
t2.SLMCLN ,
t2.SLSHPN ,
t2.SLRSDT ,
t2.SLPRJM ,
t2.SLOSEQ ,
t2.SLMERL ,
t2.SLHOLD ,
t2.SLHDBU ,
t2.SLDMBU ,
t2.SLBCRC ,
t2.SLODLN ,
t2.SLOPDJ ,
t2.SLXKCO ,
t2.SLXORN ,
t2.SLXCTO ,
t2.SLXLLN ,
t2.SLXSFX ,
t2.SLPOE ,
t2.SLPMTO ,
t2.SLANBY ,
t2.SLPMTN ,
t2.SLNUMB ,
t2.SLAAID ,
t2.SLPRAN8 ,
t2.SLSPATTN ,
t2.SLPRCIDLN ,
t2.SLCCIDLN ,
t2.SLSHCCIDLN ,
t2.SLOPPID ,
t2.SLOSTP ,
t2.SLUKID ,
t2.SLCATNM ,
t2.SLALLOC ,
t2.SLFULPID ,
t2.SLALLSTS ,
t2.SLOSCORE ,
t2.SLOSCOREO ,
t2.SLCMCO ,
t2.SLKITID ,
t2.SLKITAMTDOM ,
t2.SLKITAMTFOR ,
t2.SLKITDIRTY ,
t2.SLOCITT ,
t2.SLOCCARDNO
FROM TESTDTA.F42199_T1 T1, TESTDTA.F42199_T2 t2
WHERE t1.SLKCOO = t2.SLKCOO
AND t1.SLDOCO = t2.SLDOCO
AND t1.SLDCTO = t2.SLDCTO
AND t1.SLUPMJ = t2.SLUPMJ
AND t1.SLTDAY = t2.SLTDAY
AND t1.SLLNID = t2.SLLNID ;

create unique index TESTDTA.F42199_T1PK  ON TESTDTA.f42199_T1 (SLKCOO,SLDOCO,SLDCTO,SLUPMJ,SLTDAY,SLLNID);
create unique index TESTDTA.F42199_T2PK  ON TESTDTA.f42199_T2 (SLKCOO,SLDOCO,SLDCTO,SLUPMJ,SLTDAY,SLLNID);

These statements create my other table and the join view, but I get the following when I do an insert:

SQL Error: ORA-01776: cannot modify more than one base table through a join view
01776. 00000 -  "cannot modify more than one base table through a join view"
*Cause:    Columns belonging to more than one underlying table were either
           inserted into or updated.
*Action:   Phrase the statement as two or more separate statements.

I thought that I could work around it with an INSTEAD OF trigger, but I decided against going down that path.
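For completeness, the trigger approach would have looked something like this: an INSTEAD OF INSERT trigger on the join view that splits each row into one insert per base table. This is only a sketch with the column lists abbreviated; a real trigger would have to enumerate every column of both tables (and you'd want matching UPDATE and DELETE triggers too).

```sql
-- Hypothetical sketch of the INSTEAD OF trigger approach.  Column lists are
-- abbreviated; the real trigger needs all columns from both base tables.
CREATE OR REPLACE TRIGGER TESTDTA.F42199_T3_IOI
  INSTEAD OF INSERT ON TESTDTA.F42199_T3
  FOR EACH ROW
BEGIN
  -- Route the key columns plus T1's columns into the first base table
  INSERT INTO TESTDTA.F42199_T1
    (SLKCOO, SLDOCO, SLDCTO, SLLNID, SLUPMJ, SLTDAY /* , ...T1 columns... */)
  VALUES
    (:NEW.SLKCOO, :NEW.SLDOCO, :NEW.SLDCTO, :NEW.SLLNID,
     :NEW.SLUPMJ, :NEW.SLTDAY /* , ... */);

  -- Route the key columns plus T2's columns into the second base table
  INSERT INTO TESTDTA.F42199_T2
    (SLKCOO, SLDOCO, SLDCTO, SLLNID, SLUPMJ, SLTDAY /* , ...T2 columns... */)
  VALUES
    (:NEW.SLKCOO, :NEW.SLDOCO, :NEW.SLDCTO, :NEW.SLLNID,
     :NEW.SLUPMJ, :NEW.SLTDAY /* , ... */);
END;
/
```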

So – it’s time to get creative.

I’m going to find 10 columns where the data is blank for every row and drop them from the table.  Then I'll create a new view over that table that selects constants in place of the dropped columns – JOB DONE!
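As a sketch of what that looks like (the column names here are placeholders – the real list is whichever columns the count queries prove are empty across all 80 million rows):

```sql
-- Hypothetical sketch: drop the always-empty columns from the base table,
-- then recreate the view returning constants in their place, so the JDE
-- table definition still resolves every column.  SLXPTY / SLDUAL stand in
-- for whichever columns turn out to be empty.
ALTER TABLE TESTDTA.F42199_T2 DROP (SLXPTY, SLDUAL);

CREATE OR REPLACE VIEW TESTDTA.F42199_T3 AS
SELECT t1.SLKCOO,
       /* ...all surviving columns from t1 and t2 as before... */
       0   AS SLXPTY,   -- dropped NUMBER column, returned as constant 0
       ' ' AS SLDUAL    -- dropped NCHAR column, returned as constant blank
FROM TESTDTA.F42199_T1 t1, TESTDTA.F42199_T2 t2
WHERE t1.SLKCOO = t2.SLKCOO
  /* ...remaining join key columns as before... */ ;
```

With the constants coming from the view rather than the table, inserts through the view only ever touch real columns, and the dropped ones cost nothing in storage or compression.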

Then I’ll compress the table and all will be good.

I hear you saying, “You cannot do that! What if someone wants to use one of those columns?”  I say, 320GB and 80 million rows cannot be wrong!

select count(1) from testdta.f42199 where SLRLTM>0;
select count(1) from testdta.f42199 where SLRLDJ>0 ;
select count(1) from testdta.f42199 where SLDRQT>0 ;
select count(1) from testdta.f42199 where SLADTM>0 ;
select count(1) from testdta.f42199 where SLOPTT>0 ;
select count(1) from testdta.f42199 where SLPDTT>0 ;
select count(1) from testdta.f42199 where SLPSTM>0 ;
select count(1) from testdta.f42199 where SLXPTY>0 ;
select count(1) from testdta.f42199 where SLDEID>0 ;
select count(1) from testdta.f42199 where SLSOOR >0 ;
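Note that each of those statements is a full scan of an 80-million-row table, so ten of them gets expensive. The same answer can come from a single pass by counting the non-blank values per candidate column in one statement:

```sql
-- One full-table scan instead of ten: COUNT() ignores NULLs, so each CASE
-- expression counts only the rows where that column holds a non-zero value.
-- A zero result means the column is empty for every row.
SELECT COUNT(CASE WHEN SLRLTM > 0 THEN 1 END) AS slrltm_cnt,
       COUNT(CASE WHEN SLRLDJ > 0 THEN 1 END) AS slrldj_cnt,
       COUNT(CASE WHEN SLDRQT > 0 THEN 1 END) AS sldrqt_cnt,
       COUNT(CASE WHEN SLADTM > 0 THEN 1 END) AS sladtm_cnt,
       COUNT(CASE WHEN SLOPTT > 0 THEN 1 END) AS sloptt_cnt,
       COUNT(CASE WHEN SLPDTT > 0 THEN 1 END) AS slpdtt_cnt,
       COUNT(CASE WHEN SLPSTM > 0 THEN 1 END) AS slpstm_cnt,
       COUNT(CASE WHEN SLXPTY > 0 THEN 1 END) AS slxpty_cnt,
       COUNT(CASE WHEN SLDEID > 0 THEN 1 END) AS sldeid_cnt,
       COUNT(CASE WHEN SLSOOR > 0 THEN 1 END) AS slsoor_cnt
FROM testdta.f42199;
```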

So I use the queries above to find fields that are blank for all 80 million records, which I can then academically remove from the table.

Create a view that just selects ‘’ or 0 for those columns and the job is done!
