Shannon's JD Edwards CNC Blog

Good representation of the digital journey



I’ve been to a number of digital workshops, and it’s been hard to define what an individual digital journey is.  It’s a dilemma: how do you articulate “going digital” or a digital transformation when it’s always a very subjective journey?  I find the best approach is to put some structure around the definition and around the parts of the journey that are relevant to a particular company.  The framework below (as originally found in a Microsoft presentation) is a great start.

You can transform your business in the following areas:

image

So let’s at least start to categorise the elements of a digital journey, then build upon this.

  • Empowering your employees
  • Engaging your customers
  • Transforming your products and services
  • Optimising your operations

I think we can also cross-reference the above with our knowledge of mega-trends and come up with a handy matrix of megatrends within each digital category:


image

I think that this categorisation assists with identification of opportunities for innovation in your organisation.

Note that I’ve added a couple of my own for “good luck”: integration and configuration (the citizen developer).

I could categorise most of the innovation at Fusion5 with the above.


Integration options for JD Edwards, is there one size fits all?


No, there is not.

…  Should I end my post there?  Probably not…

I think that you should begin to consider a single solution for your integrations, as this approach is only going to become more popular.  If SaaS has anything to do with your digital journey – which it does – then you need to start considering orchestration.

Orchestration might not be the correct term; it might be integration enablement – or at least an integration funnel.  What I mean here is that all systems have mechanisms for integration (JDE has heaps: AIS, BSSV, UBE, XMLCallObject, COM… the list goes on), but they are a little proprietary by nature.  Yes, JDE will talk REST – but you have to form the document exactly as JD Edwards requires.  The same goes for web services; it takes some work.  Without an integration solution, this message transformation must happen at one end of the integration, i.e.

  • Modify JDE to take a generic RESTful payload and then use logic in JDE to transform this into something that JDE can consume / produce and reply to
  • Modify the other “point” system (as this would be point to point) to talk very specifically, as JDE expects, and have JDE reply natively…  Then interpret that native talk for your other point system.

image

In the instance above I’m going to modify JD Edwards.  I’ll write a basic screen with a large text box that someone can call and plonk a JSON document into.  My code will rip this up, act upon it, and then write the results into another text box that the calling system can read.  So we’ve opened up JD Edwards to talk fairly generically, but you must communicate with JDE in a fairly standard format (though you don’t need to know all of the controls etc. in AIS).  Or, if you don’t want to make any mods to the WMS, then you need to write a BSSV or some generic code to listen for the WMS message and act upon it.

So above we can see that we are writing quite a bit of code in potentially two solutions to get one integration going…  Notice that if you had a bunch of skills in the WMS, you might make it do the heavy lifting and keep the code in JDE light touch.

What if there is then another system?

image

It does start to get complex.  We are trying to use JDE as the “funnel” and write the logic for all of the systems in JDE, but that is a lot of JD Edwards code, and JDE might not be too good at ripping apart XML and JSON.  This is where orchestration or integration comes into play.

What are your options?  Again, the Gartner magic quadrant is going to help you create a shortlist:

image

  • Jitterbit
  • Oracle
  • Dell Boomi
  • Mulesoft
  • BizTalk
  • … and more

I’ve applied some logic to this list, as these are the solutions that I want to speak about.

What are these going to allow you to do?

You are going to be able to do your integration-intensive work in a platform / environment that is designed to integrate.  You’ll be able to point and click integration items, apply translations, and speak native web services and JSON.  You will be able to do native flat file, read CSV and talk to databases using JDBC – nice!!  All this within your integration solution – nicer.

image

This means that all of your logic for integrations is in one place.  All of your monitoring and maintenance is in one place.  Your integration SDLC should be more agile than your ERP SDLC, and this can also be managed in a single solution.  Are you ready to swap out any component in your enterprise?  Yes.

This is great: you are not modifying ERP code and WMS code; the integration suite is talking to the default integration points of each solution and doing the mapping and coordination between the two systems.  You can keep the ERP, WMS and CRM standard and do the heavy lifting in the integration layer.

So I do mention certain products, but I think that they’ll all get the job done.  Is one better than another?  Probably at some things and not at others.

As a client you need to decide on your price point and tech bias.  If you are a Microsoft shop and want Azure only, BizTalk in Azure is for you.

If you are an Oracle shop then perhaps Oracle’s Integration Cloud Service is for you: https://cloud.oracle.com/integration

I have a bunch of clients very happy using Jitterbit, which is a great solution for complex requirements.

What if you like to Develop?

My personal favourite is to use some generic “push|pull” cloud based solutions to solve this problem – why?  I have some amazing developers that can get this done easily.  I have the power of AWS and Lambda to communicate securely and quickly and only pay for the instructions that I run.  I can have subscribers and queues that I configure and own.  Infinitely extensible, highly available and disaster recoverable.  Sure, there is some dev – but I’m leaning on cloud based constructs that can span availability zones and regions if necessary.

I’m a little cloud agnostic, so I’d also look into Google pub/sub (https://cloud.google.com/pubsub/docs/overview), which also has amazing extensibility and flexibility in what you can do and how you can do it.  Building a complete integration solution is NOT as hard as you may think.  If you take integration down to the lowest common denominator, pub/sub – push|pull – is all that you need.
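To make this concrete, here’s a minimal sketch of the push|pull pattern using the AWS CLI and SQS (the queue name, account id and payload are made up for illustration – in real life one end would be an RTE listener or a Lambda):

# one-off: create a queue that you configure and own
aws sqs create-queue --queue-name jde-outbound

# the "push" side: publish a sales order event as a generic JSON payload
aws sqs send-message \
  --queue-url https://sqs.ap-southeast-2.amazonaws.com/123456789012/jde-outbound \
  --message-body '{"type":"SalesOrder","doco":12345,"action":"created"}'

# the "pull" side: the subscribing system polls for work
aws sqs receive-message \
  --queue-url https://sqs.ap-southeast-2.amazonaws.com/123456789012/jde-outbound \
  --max-number-of-messages 10 --wait-time-seconds 20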

What is the secret sauce?

Cloud…  You need to set up your integrations as if everything were SaaS.  Go through the learning and the pain now and reap the benefits when EVERY piece of software that your organisation uses is SaaS.  Establish your integration framework as if it were “service” based – microservice if you like.  Then you can expose it not only to an integration, but also to a mobile application and an active web page.  If you are exposing a consistent end point, it can be modified behind the scenes without changing the interface to all of the external systems.

INTaaS

Of course!  I think that this is logical if you have a good partner (like Fusion5) and you don’t really want to make all of the technical decisions.  You can just tell your service provider what needs to be connected and they do it.  They manage it, they maintain it, and they provide workflow and a single source of truth for integrations – this is a good model.  It allows you to get back to adding value to the decision making process, knowing that the integrations are just working.

Bullet points to help you make your decision

  • cloud based integration solution
  • ensure you get monitoring and workflow if you want to own the solution and make all the technical decisions
  • ensure that your integration provider will give you this if you do not want to own it
  • everything will be SaaS / service eventually, get ready.
  • reuse your logic, think microservice
  • RTE is an awesome way to get information out of JDE into a queueing service…  Use it!
  • AIS is a must for lightweight integration into JD Edwards – it’s not just for mobile apps
  • Weigh up the cost of technical debt (writing ERP code / WMS code to support an integration) vs. using standard integration points and an orchestration / integration solution…  You might find that the subscription costs quickly become less than the technical debt you’ll incur.

oracle index compression and JDE


Some indexes compress well and some do not – that is my position on index compression, which is about the same as my position on table compression.

There are two types of index compression:

  • Creating an Index Using Prefix Compression
    Creating an index using prefix compression (also known as key compression) eliminates repeated occurrences of key column prefix values. Prefix compression is most useful for non-unique indexes with a large number of duplicates on the leading columns.
  • Creating an Index Using Advanced Index Compression (only available in 12c)
    Creating an index using advanced index compression reduces the size of all supported unique and non-unique indexes. Advanced index compression improves the compression ratios significantly while still providing efficient access to the indexes. Therefore, advanced index compression works well on all supported indexes, including those indexes that are not good candidates for prefix compression.
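For reference, here is roughly what enabling each type looks like on an existing index (a sketch only – the schema and index names are from my example below, and the prefix column count is something you’d tune):

-- prefix (key) compression on the leading 3 columns
ALTER INDEX CRPDTA.F0911_10 REBUILD COMPRESS 3;

-- 12c advanced index compression
ALTER INDEX CRPDTA.F0911_15 REBUILD COMPRESS ADVANCED LOW;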

Take a look at the F0911 data below:

select t1.segment_name, t1.owner, t1.segment_type, t1.tablespace_name, t1.bytes, t2.COMPRESSION
from dba_segments t1, all_indexes t2
where t1.owner in ('TESTDTA', 'ORADTA', 'CRPDTA')
and t2.index_name = t1.segment_name
and t2.owner = t1.owner
and t1.segment_name like 'F0911%'
order by t1.segment_name, t1.owner;

Provides:

Note that in my example CRPDTA and TESTDTA are both the same data and compressed.  ORADTA is an older copy and not compressed.

SEGMENT_NAME     OWNER        SEGMENT_TYPE       TABLESPACE_NAME    BYTES COMPRESS

F0911_10    CRPDTA    INDEX    CRPDTAI    9964158976    ENABLED
F0911_10    ORADTA    INDEX    ORADTAI    34437660672    DISABLED
F0911_10    TESTDTA    INDEX    SSDMAXI    9963700224    ENABLED

F0911_11    CRPDTA    INDEX    CRPDTAI    4324261888    ENABLED
F0911_11    ORADTA    INDEX    ORADTAI    15610150912    DISABLED
F0911_11    TESTDTA    INDEX    SSDMAXI    4312596480    ENABLED

F0911_12    CRPDTA    INDEX    CRPDTAI    4467720192    ENABLED
F0911_12    ORADTA    INDEX    ORADTAI    13323927552    DISABLED
F0911_12    TESTDTA    INDEX    SSDMAXI    4467130368    ENABLED

F0911_13    CRPDTA    INDEX    CRPDTAI    4342087680    ENABLED
F0911_13    ORADTA    INDEX    ORADTAI    13638959104    DISABLED
F0911_13    TESTDTA    INDEX    SSDMAXI    4298702848    ENABLED

F0911_15    CRPDTA    INDEX    CRPDTAI    17101357056    ENABLED
F0911_15    ORADTA    INDEX    ORADTAI    14177140736    DISABLED
F0911_15    TESTDTA    INDEX    SSDMAXI    17108828160    ENABLED


List of all F0911 indexes is below:

image


Interesting hey, some are really good and some are pretty junk at compression.

_10 shrinks to less than a third of its original size (roughly 3.5x), and _11 does about the same – but look at _15, where the compressed version is actually larger (that is partly a problem with my dodgy data).

Why is this so?

Looking at the definitions of the indexes:

F0911_10 = GLPOST, GLAID, GLLT, GLCTRY

In the database, this is made up of the following columns:

image

CRPDTA    F0911_10    CRPDTA    F0911    GLPOST    1    2    1    ASC
CRPDTA    F0911_10    CRPDTA    F0911    GLAID    2    16    8    ASC
CRPDTA    F0911_10    CRPDTA    F0911    GLLT    3    4    2    ASC
CRPDTA    F0911_10    CRPDTA    F0911    GLCTRY    4    22    0    ASC
CRPDTA    F0911_10    CRPDTA    F0911    GLFY    5    22    0    ASC
CRPDTA    F0911_10    CRPDTA    F0911    GLPN    6    22    0    ASC
CRPDTA    F0911_10    CRPDTA    F0911    GLSBL    7    16    8    ASC
CRPDTA    F0911_10    CRPDTA    F0911    GLSBLT    8    2    1    ASC
CRPDTA    F0911_10    CRPDTA    F0911    GLDGJ    9    22    0    ASC
CRPDTA    F0911_10    CRPDTA    F0911    GLASID    10    50    25    ASC
CRPDTA    F0911_10    CRPDTA    F0911    GLBRE    11    2    1    ASC

I feel that this compresses well because the leading columns (GLPOST, GLAID, GLLT) contain lots of repeated values, so there are plenty of duplicate key prefixes for the database to eliminate…

Index F0911_15 is rubbish:

CRPDTA    F0911_15    CRPDTA    F0911    GLDCT    1    4    2    ASC
CRPDTA    F0911_15    CRPDTA    F0911    GLDOC    2    22    0    ASC
CRPDTA    F0911_15    CRPDTA    F0911    GLKCO    3    10    5    ASC
CRPDTA    F0911_15    CRPDTA    F0911    GLDGJ    4    22    0    ASC
CRPDTA    F0911_15    CRPDTA    F0911    GLLT    5    4    2    ASC
CRPDTA    F0911_15    CRPDTA    F0911    GLEXTL    6    4    2    ASC
CRPDTA    F0911_15    CRPDTA    F0911    SYS_NC00142$    7    34    0    DESC

So, because I’m using standard prefix compression and index _15 leads with GLDCT then GLDOC (the document number, which is close to unique), the database finds very few repeated key prefixes, which means it cannot get any benefit out of prefix compression.  Okay, that is good to know.

move TB from NZ to Aus via S3 bucket of course


I need to move a lot of data quickly, so I’m going to use an S3 bucket and install the aws cli on my linux hosts to be able to put and get.

Let’s begin the dance…

To use aws-cli, you need to install it with pip.

To install pip (https://pip.pypa.io//en/latest/installing/#get-pip-py-options) you need to wget, or otherwise copy across, a copy of get-pip.py for python > 2.6.

Remember that you need to set the proxy (more than likely on your server):

export http_proxy=http://moirs:Password\!@proxy:8080

Then test it

[root@ronin0 ~]# wget www.google.com
--2017-06-23 16:05:57--  http://www.google.com/
Resolving proxy... 10.241.10.79
Connecting to proxy|10.241.10.79|:8080... connected.
Proxy request sent, awaiting response... 302 Found
Location:
http://www.google.co.nz/?gfe_rd=cr&ei=pZNMWeKlFqHM8gfo7LDQCg [following]
--2017-06-23 16:05:57-- 
http://www.google.co.nz/?gfe_rd=cr&ei=pZNMWeKlFqHM8gfo7LDQCg
Connecting to proxy|10.241.10.79|:8080... connected.
Proxy request sent, awaiting response... 200 OK
Length: unspecified [text/html]
Saving to: `index.html.1'

    [ <=>                                                                                                                        ] 13,313      --.-K/s   in 0.001s

2017-06-23 16:05:57 (15.6 MB/s) - `index.html.1' saved [13313]

Tidy, proxy is good

Now you need to run the get-pip.py script that you’ve copied in

python get-pip.py --proxy="http://moirs:Password\!@proxy:8080"

DEPRECATION: Python 2.6 is no longer supported by the Python core team, please upgrade your Python. A future version of pip will drop support for Python 2.6
Collecting pip
/tmp/tmpHeNyVB/pip.zip/pip/_vendor/requests/packages/urllib3/util/ssl_.py:318: SNIMissingWarning: An HTTPS request has been made, but the SNI (Subject Name Indication) extension to TLS is not available on this platform. This may cause the server to present an incorrect TLS certificate, which can cause validation failures. You can upgrade to a newer version of Python to solve this. For more information, see
https://urllib3.readthedocs.io/en/latest/security.html#snimissingwarning.
/tmp/tmpHeNyVB/pip.zip/pip/_vendor/requests/packages/urllib3/util/ssl_.py:122: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. You can upgrade to a newer version of Python to solve this. For more information, see
https://urllib3.readthedocs.io/en/latest/security.html#insecureplatformwarning.
  Downloading pip-9.0.1-py2.py3-none-any.whl (1.3MB)
    100% |################################| 1.3MB 600kB/s
Collecting setuptools
  Downloading setuptools-36.0.1-py2.py3-none-any.whl (476kB)
    100% |################################| 481kB 1.4MB/s
Collecting wheel
  Downloading wheel-0.29.0-py2.py3-none-any.whl (66kB)
    100% |################################| 71kB 1.3MB/s
Collecting argparse; python_version == "2.6" (from wheel)
  Downloading argparse-1.4.0-py2.py3-none-any.whl
Installing collected packages: pip, setuptools, argparse, wheel
  Found existing installation: argparse 1.2.1
    Uninstalling argparse-1.2.1:
      Successfully uninstalled argparse-1.2.1
Successfully installed argparse-1.4.0 pip-9.0.1 setuptools-36.0.1 wheel-0.29.0
/tmp/tmpHeNyVB/pip.zip/pip/_vendor/requests/packages/urllib3/util/ssl_.py:122: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. You can

Great, now I can install aws-cli

[root@ronin0 ~]# pip --proxy http://moirs:Password\!@proxy:8080 install awscli
DEPRECATION: Python 2.6 is no longer supported by the Python core team, please upgrade your Python. A future version of pip will drop support for Python 2.6
Collecting awscli
/usr/lib/python2.6/site-packages/pip/_vendor/requests/packages/urllib3/util/ssl_.py:318: SNIMissingWarning: An HTTPS request has been made, but the SNI (Subject Name Indication) extension to TLS is not available on this platform. This may cause the server to present an incorrect TLS certificate, which can cause validation failures. You can upgrade to a newer version of Python to solve this. For more information, see
https://urllib3.readthedocs.io/en/latest/security.html#snimissingwarning.
  SNIMissingWarning
/usr/lib/python2.6/site-packages/pip/_vendor/requests/packages/urllib3/util/ssl_.py:122: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. You can upgrade to a newer version of Python to solve this. For more information, see
https://urllib3.readthedocs.io/en/latest/security.html#insecureplatformwarning.
  InsecurePlatformWarning
  Downloading awscli-1.11.111-py2.py3-none-any.whl (1.2MB)
    100% |################################| 1.2MB 585kB/s
Collecting botocore==1.5.74 (from awscli)
  Downloading botocore-1.5.74-py2.py3-none-any.whl (3.5MB)
    100% |################################| 3.5MB 282kB/s
Collecting rsa<=3.5.0,>=3.1.2 (from awscli)
  Downloading rsa-3.4.2-py2.py3-none-any.whl (46kB)
    100% |################################| 51kB 5.1MB/s
Collecting s3transfer<0.2.0,>=0.1.9 (from awscli)
  Downloading s3transfer-0.1.10-py2.py3-none-any.whl (54kB)
    100% |################################| 61kB 82kB/s
Requirement already satisfied: argparse>=1.1; python_version == "2.6" in /usr/lib/python2.6/site-packages (from awscli)
Collecting docutils>=0.10 (from awscli)
  Downloading docutils-0.13.1-py2-none-any.whl (537kB)
    100% |################################| 542kB 1.3MB/s
Collecting colorama<=0.3.7,>=0.2.5 (from awscli)
  Downloading colorama-0.3.7-py2.py3-none-any.whl
Collecting PyYAML<=3.12,>=3.10 (from awscli)
  Downloading PyYAML-3.12.tar.gz (253kB)
    100% |################################| 256kB 1.9MB/s
Collecting simplejson==3.3.0; python_version == "2.6" (from botocore==1.5.74->awscli)
  Downloading simplejson-3.3.0.tar.gz (67kB)
    100% |################################| 71kB 4.0MB/s
Collecting ordereddict==1.1; python_version == "2.6" (from botocore==1.5.74->awscli)
  Downloading ordereddict-1.1.tar.gz
Collecting python-dateutil<3.0.0,>=2.1 (from botocore==1.5.74->awscli)
  Downloading python_dateutil-2.6.0-py2.py3-none-any.whl (194kB)
    100% |################################| 194kB 2.3MB/s
Collecting jmespath<1.0.0,>=0.7.1 (from botocore==1.5.74->awscli)
  Downloading jmespath-0.9.3-py2.py3-none-any.whl
Collecting pyasn1>=0.1.3 (from rsa<=3.5.0,>=3.1.2->awscli)
  Downloading pyasn1-0.2.3-py2.py3-none-any.whl (53kB)
    100% |################################| 61kB 4.8MB/s
Collecting futures<4.0.0,>=2.2.0; python_version == "2.6" or python_version == "2.7" (from s3transfer<0.2.0,>=0.1.9->awscli)
  Downloading futures-3.1.1-py2-none-any.whl
Collecting six>=1.5 (from python-dateutil<3.0.0,>=2.1->botocore==1.5.74->awscli)
  Downloading six-1.10.0-py2.py3-none-any.whl
Building wheels for collected packages: PyYAML, simplejson, ordereddict
  Running setup.py bdist_wheel for PyYAML ... done
  Stored in directory: /root/.cache/pip/wheels/2c/f7/79/13f3a12cd723892437c0cfbde1230ab4d82947ff7b3839a4fc
  Running setup.py bdist_wheel for simplejson ... done
  Stored in directory: /root/.cache/pip/wheels/5a/a5/b9/b0c89f0c5c40e2090601173e9b49091d41227c6377020e4e68
  Running setup.py bdist_wheel for ordereddict ... done
  Stored in directory: /root/.cache/pip/wheels/cf/2c/b5/a1bfd8848f7861c1588f1a2dfe88c11cf3ab5073ab7af08bc9
Successfully built PyYAML simplejson ordereddict
Installing collected packages: simplejson, ordereddict, six, python-dateutil, jmespath, docutils, botocore, pyasn1, rsa, futures, s3transfer, colorama, PyYAML, awscli
  Found existing installation: simplejson 2.0.9
    Uninstalling simplejson-2.0.9:
      Successfully uninstalled simplejson-2.0.9
  Found existing installation: ordereddict 1.2
    DEPRECATION: Uninstalling a distutils installed project (ordereddict) has been deprecated and will be removed in a future version. This is due to the fact that uninstalling a distutils project will only partially uninstall the project.
    Uninstalling ordereddict-1.2:
      Successfully uninstalled ordereddict-1.2
Successfully installed PyYAML-3.12 awscli-1.11.111 botocore-1.5.74 colorama-0.3.7 docutils-0.13.1 futures-3.1.1 jmespath-0.9.3 ordereddict-1.1 pyasn1-0.2.3 python-dateutil-2.6.0 rsa-3.4.2 s3transfer-0.1.10 simplejson-3.3.0 six-1.10.

Now we can run it – but remember the https_proxy environment variable too:

export https_proxy=http://moirs:Password\!@proxy:8080

aws configure

AWS Access Key ID [None]: GFDGHTRHBT
AWS Secret Access Key [None]: GFDHGF
Default region name [None]: ap-southeast-2
Default output format [None]:

Note that you get your secret key and access key ID from the AWS console for your username.

You are cooking with gas…

aws s3 ls
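From here, the actual move is just a put on the NZ side and a get on the Aus side (the bucket and file names below are made up for illustration):

# NZ source host: push the export up to the bucket
aws s3 cp /backups/bigexport.dmp.gz s3://my-transfer-bucket/bigexport.dmp.gz

# Aus target host: pull it back down
aws s3 cp s3://my-transfer-bucket/bigexport.dmp.gz /restore/bigexport.dmp.gz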

Building an ODA for JDE


I’ve been back on the tools “big time”, building a JD Edwards environment on an ODA – a new FLASH based X6-2HA.  This is pretty exciting stuff (for a nerd like me).

Something interesting when installing VMs – which has been a little painful.

You need to run:

[root@sodax6-1 testing2]# oakcli import vmtemplate OL7U3 -assembly /OVS/Repositories/testing2/OVM_OL7U3_x86_64_PVHVM.ova -repo testing2 -node 0

Imported VM Template

This is being run from ODA_BASE, but you are specifying the location on DOM0 – WHAT???  Stupid hey?

You are on ODA_BASE and you do df -k:

[root@sodax6-1 testing2]# df -k
Filesystem            1K-blocks       Used  Available Use% Mounted on
/dev/xvda2             57191708   12601536   41684932  24% /
tmpfs                 132203976    1246296  130957680   1% /dev/shm
/dev/xvda1               471012      35731     410961   8% /boot
/dev/xvdb1             96119564   33014300   58222576  37% /u01
/dev/asm/acfsvol-49    52428800     194884   52233916   1% /cloudfs
/dev/asm/testing-216 1048576000  298412232  750163768  29% /u01/app/sharedrepo/testing
/dev/asm/testing2-216
                      4194304000 1454789000 2739515000  35% /u01/app/sharedrepo/testing2
/dev/asm/datastore-344
                        32505856   17050960   15454896  53% /u01/app/oracle/oradata/datastore
/dev/asm/datastore-216
                      4393533440 4240976672  152556768  97% /u02/app/oracle/oradata/datastore
/dev/asm/datastore-49
                       530579456  326606008  203973448  62% /u01/app/oracle/fast_recovery_area/datastore

But, from dom0:

[root@sodax6-1dom0 testing2]# df -k
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/sda3             19840924   3543148  15273636  19% /
/dev/sda2            428805744 192371988 214300204  48% /OVS
/dev/sda1               497829     44970    427157  10% /boot
tmpfs                  1233052         0   1233052   0% /dev/shm
none                   1233052       112   1232940   1% /var/lib/xenstored
192.168.18.21:/u01/app/sharedrepo/testing
                      1048576000 298412224 750163776  29% /OVS/Repositories/testing
192.168.18.21:/u01/app/sharedrepo/testing2
                      4194304000 1454788992 2739515008  35% /OVS/Repositories/testing2

Make sure that you reference your template as if it’s “hung” off DOM0, not ODA_BASE.

See below – this is stealing a copy of the “JDE in a box” system disk from the JD Edwards templates.  Well, it’s not really stealing, but you can get an older version of the OS this way.  Note also that it needs to be a compressed tar ball to work.

[root@sodax6-1 testing2]# oakcli import vmtemplate EL58 -files /OVS/Repositories/testing2/e1_X86_sys_914.tgz -repo testing2 -node 0

A special note:

I used AWS S3 buckets as a temp location (instead of a VPN, because of speed and configuration problems with the VPN).  I was able to get 50MB/sec download from the S3 bucket into the Oracle data centre in Sydney – wow!  That is very impressive.

JDE slow, missing indexes? find it fast… fix it fast!


Here is a basic SQL that will tell you if you are missing any indexes (PK or other) in Oracle, based upon your current central objects.

Note that there is a difference in the naming of the unique index (_PK), hence the large union.

select trim(tpobnm) || '_' || tpinid as jdeindex
from py900.f98712
where tpuniq <> 1
and not exists
(select 1 from all_indexes
where owner = 'CRPDTA'
and trim(tpobnm) || '_' || tpinid = index_name)
and exists
(select 1
from all_tables
where owner = 'CRPDTA'
and table_name = trim(tpobnm))
union
select trim(tpobnm) || '_PK' as jdeindex
from py900.f98712
where tpuniq = 1
and not exists
(select 1 from all_indexes
where owner = 'CRPDTA'
and trim(tpobnm) || '_PK' = index_name)
and exists
(select 1
from all_tables
where owner = 'CRPDTA'
and table_name = trim(tpobnm))
order by 1 desc;

The results will tell you quickly what you are missing.  This is a nice quick sanity check.

I can admit that this works…  For me, I see the following in the results:

F00151_PK
F0006_7
F0006_5
F0006_4
F0006_2
F00021_PK

So then I look in SQL Developer and see

image

Then I look in TDA and find:

image

Nice one SQL – missing indexes.

I grab the definitions and create them.  Note that I’m using compression too.


CREATE INDEX "CRPDTA"."F0006_2" ON "CRPDTA"."F0006" ("MCCO")
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "CRPINDEX2"
PARALLEL COMPRESS 1 ;

CREATE INDEX "CRPDTA"."F0006_4" ON "CRPDTA"."F0006" ("MCSTYL", "MCCO")
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "CRPINDEX2"
PARALLEL COMPRESS 1 ;

CREATE INDEX "CRPDTA"."F0006_5" ON "CRPDTA"."F0006" ("MCSTYL", "MCFMOD")
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "CRPINDEX2"
PARALLEL COMPRESS 1 ;

CREATE INDEX "CRPDTA"."F0006_6" ON "CRPDTA"."F0006" ("MCAN8" DESC)
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "CRPINDEX2"
PARALLEL COMPRESS 1 ;

CREATE INDEX "CRPDTA"."F0006_7" ON "CRPDTA"."F0006" ("MCCLNU", "MCPCTN", "MCDOCO")
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "CRPINDEX2"
PARALLEL COMPRESS 1 ;

And run the SQL again:

No more F0006 – you’d think that it works!

Why did I do this?

Take a look at my IO!  12GB of physical reads a second…  WHAT!  I needed to track this down fast.  I managed to find the problematic SQL and then also noticed that an index was missing on the F42119…  Okay, not a problem.  But I wanted to make sure that there were no other missing indexes.

image
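If you need to hunt down that kind of problem SQL yourself, something like the following is a quick way in (a sketch – it just lists the top 10 statements in the shared pool by physical reads):

select * from (
  select sql_id, disk_reads, executions, sql_text
  from v$sql
  order by disk_reads desc
) where rownum <= 10;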

AWR your UBE. Performance investigation with JD Edwards (JDE) oracle database and UBEs


This is pretty cool, if I do say so myself.

Have you ever wanted a little more Oracle performance information out of your UBEs?  I know logging is good, but AWR is better for performance work.

Here are some basics for you (but this could be taken to a whole other level!).

Firstly, the basics of AWR:

Create a snapshot, create another one and then create a report based upon the first and second snapshots – RAD!

create

EXEC DBMS_WORKLOAD_REPOSITORY.create_snapshot;

run UBE

runube JDE JDE PY900 *ALL $1   $2     QBATCH     Interactive Hold Save

create

EXEC DBMS_WORKLOAD_REPOSITORY.create_snapshot;

Let’s put that into a simple korn shell script:

if [ $# -ne 2 ]
  then
   echo "USAGE: $0 REPORT VERSION"
   exit
fi
sqlplus JDE/JDE@JDETEST <<EOF
EXEC DBMS_WORKLOAD_REPOSITORY.create_snapshot;
select max(snap_id)
from
    dba_hist_snapshot ;
quit;
EOF
time runube JDE JDE PY900 *ALL $1   $2     QBATCH     Interactive Hold Save
sqlplus JDE/JDE@JDETEST <<EOF
EXEC DBMS_WORKLOAD_REPOSITORY.create_snapshot;
select max(snap_id)
from
    dba_hist_snapshot;
quit;
EOF


I like to start simple, then get complex.

This is cool.  It’ll snap and tell you the ID, it’ll run the job and tell you how long it took to run, and then it will snap again and tell you the next ID – coolio.

But I want more and neater.

So now:

if [ $# -ne 2 ]
  then
   echo "USAGE: $0 REPORT VERSION"
   exit
fi
sqlplus JDE/JDE@JDETEST <<EOF
set feedback off
EXEC DBMS_WORKLOAD_REPOSITORY.create_snapshot;
select 'START##' || max(snap_id)
from
    dba_hist_snapshot ;
quit;
EOF
echo $1_$2 RUNNING##
time runube JDE JDE PY900 *ALL $1   $2     QBATCH     Interactive Hold Save
sqlplus JDE/JDE@JDETEST <<EOF
set feedback off
EXEC DBMS_WORKLOAD_REPOSITORY.create_snapshot;
select 'FINISH##' || max(snap_id)
from
    dba_hist_snapshot;
quit;
EOF

This is a little easier, because of how the output looks.

You call it like this:

./runme.ksh R30812 F4109 >> R30812_F4109.out 2>&1

and the output looks like:

SQL*Plus: Release 11.2.0.1.0 Production on Fri Jun 30 14:16:17 2017

Copyright (c) 1982, 2009, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options

SQL> SQL> SQL>   2    3
'START##'||MAX(SNAP_ID)
-----------------------------------------------
START##146
SQL> Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options
R047001A_S0001 RUNNING##
Using User/Password information on the command line has been deprecated.  Although it will continue to work in this release, it will no longer be available as an option in a future release.  Please switch to using one of the -p -f or -d options instead.

New Usage: runube       <[-p|-P] [-f|-F|-d|-D FilePath] [user password]>
                        <Environment>
                        <Role>
                        <ReportName>
                        <VersionName>
                        <JobQueue>
                        <"Interactive"|"Batch">
                        <"Print"|"Hold">
                        <"Save"|"Delete">
                        [Printer]
        -p|-P                   Prompts for user/password information

        -f|-f FilePath          Reads user/password information from the plain text file that is specified in FilePath.

        -d|-D FilePath          Reads user/password information from the plain text file, and indicates the automatic removal of the file.


real    0m19.343s
user    0m8.120s
sys     0m0.242s

SQL*Plus: Release 11.2.0.1.0 Production on Fri Jun 30 14:16:38 2017

Copyright (c) 1982, 2009, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options

SQL> SQL> SQL>   2    3
'FINISH##'||MAX(SNAP_ID)
------------------------------------------------
FINISH##147
SQL> Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options


You just need to awk through the output to find ## (using grep -v ‘FINISH##’ to separate the start snap from the finish snap) and you’ll feed those results into the next script.
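For example, something along these lines will pull the two snap ids out of the log (a sketch, matching the START## / FINISH## markers above and excluding the quoted heading lines):

STARTSNAP=`grep 'START##' R30812_F4109.out | grep -v "'" | awk -F'##' '{print $2}'`
ENDSNAP=`grep 'FINISH##' R30812_F4109.out | grep -v "'" | awk -F'##' '{print $2}'`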

Right, but I want to parameterise this – I don’t want to enter the snaps by hand.

SELECT
    output
FROM    TABLE(dbms_workload_repository.awr_report_text ((select dbid from v$database),(select instance_number from v$instance),112,113 ));

So then I can automate the output too!

./runme.ksh R0010P XJDE0001 >> R0010P_XJDE0001.out 2>&1

The following bad boy will create the AWR report for you.

if [ $# -ne 2 ]
   then
     echo "USAGE FROMSNAP TOSNAP"
     exit
fi

sqlplus JDE/JDE@JDETEST <<EOF
SELECT
     output
  FROM    TABLE(dbms_workload_repository.awr_report_text ((select dbid from v\$database),(select instance_number from v\$instance),$1,$2 ));
quit;
EOF

Putting it all together, you can now grep through the output of your script.

extra for experts

If you wanted to be totally RAD, you could actually create a script and call it runube (but make it a korn shell script).  Basically it would call your stuff and then the actual runube later.  You could put all of the AWR magic in there also, so that you could have AWRs for all of your reports.  Note that it might be a bit messy because of other concurrent transactions, but you’d get the hint about the performance and you’d know which snaps to use for which reports.

Forget missing indexes, have you thought about UNUSABLE indexes


So I did all that good work in https://shannonscncjdeblog.blogspot.com.au/2017/06/jde-slow-missing-indexes-find-it-fast.html but I found a bunch of my jobs were still not using indexes.  When I looked into this some more, I saw that the indexes were UNUSABLE!

Wow, deeper into the rabbit hole.

Use the following SQL to find indexes that are not valid.

select * from all_indexes where status <> 'VALID';

Then use the following to generate the statements to fix them.

select 'ALTER INDEX CRPDTA.' || index_name || ' REBUILD TABLESPACE CRPINDEX2;' FROM ALL_INDEXES where status = 'UNUSABLE';

And run the results, easy!

ALTER INDEX CRPDTA.F07351T_PK REBUILD TABLESPACE CRPINDEX2;
ALTER INDEX CRPDTA.F07351_PK REBUILD TABLESPACE CRPINDEX2;
ALTER INDEX CRPDTA.F07350_PK REBUILD TABLESPACE CRPINDEX2;
ALTER INDEX CRPDTA.F07315_PK REBUILD TABLESPACE CRPINDEX2;
ALTER INDEX CRPDTA.F073111_PK REBUILD TABLESPACE CRPINDEX2;
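If you want to generate and run the fixes in one hit, the classic sqlplus spool trick works too (a sketch – same tablespace as my example above):

set heading off
set feedback off
set pages 0
set lines 200
spool rebuild_unusable.sql
select 'ALTER INDEX ' || owner || '.' || index_name || ' REBUILD TABLESPACE CRPINDEX2;'
from all_indexes where status = 'UNUSABLE';
spool off
@rebuild_unusable.sql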


Ever wanted to shrink a datafile?


Are you like me, and sometimes get a little bit too aggressive on space creation – ask for 4TB, not 1…

Anyway, reality might mean that you need to scale things back, so here are some handy commands to do that:

See the size of the data files:

SELECT name, bytes/1024/1024 AS size_mb FROM v$datafile;

Shrink the datafile:

ALTER DATABASE DATAFILE '/u02/app/oracle/oradata/datastore/CRDTAI.dbf' RESIZE 1024G;

ALTER DATABASE DATAFILE '/u02/app/oracle/oradata/datastore/CRPDTAT.dbf' RESIZE 800G;

It’s that simple (well it was for me).
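One caveat worth knowing: the resize will fail with ORA-03297 if there are still extents sitting beyond the size you ask for.  A rough way to see how far back each file can actually go (assuming an 8K block size) is:

select file_id, max(block_id + blocks) * 8192 / 1024 / 1024 as high_water_mb
from dba_extents
group by file_id
order by file_id;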

JDE UBE automatic AWR for jobs


This is easy and cool and you could do a lot more with it.  You’ll understand what I mean when I’m done.

When tracking down performance problems, wouldn’t it be nice to see all of the tracing behind the scenes?

It seems you do not need Statspack to get this cool information:

"Gathering database statistics using the AWR is enabled by default and is controlled by the STATISTICS_LEVEL initialization parameter. The STATISTICS_LEVEL parameter should be set to the TYPICAL or ALL to enable statistics gathering by the AWR. The default setting is TYPICAL. Setting STATISTICS_LEVEL to BASIC disables many Oracle Database features, including the AWR, and is not recommended." Thanks Tom, one of the few articles I understood https://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:9522853800346871377 

Great, standard edition is going to be fine too.

Here is the script that you’ll need, it does the following:

  1. create AWR snapshot (begin)
  2. run UBE, with unix time also
  3. create AWR snapshot (end)
  4. run AWR report based upon the two snapshots
  5. aws s3 cp the AWR to an AWS S3 bucket for review
  6. use a generated link in Excel to point your results (summary of F986110|F986114) to the html file in the bucket.

Holy moly!

The script:

if [ $# -lt 2 ]
  then
   echo "USAGE: $0 REPORT VERSION"
   exit
fi
echo "set feedback off"> sql$$.sql
echo "EXEC DBMS_WORKLOAD_REPOSITORY.create_snapshot;">> sql$$.sql
echo "select 'START##' || max(snap_id) from dba_hist_snapshot ;">> sql$$.sql
echo "quit;">> sql$$.sql
STARTSNAP=`sqlplus JDE/JDE@JDETEST @sql$$.sql |grep \#\# |grep -v \'|grep START|awk -F\# '{print $3}'`
echo $1_$2 RUNNING##
time runube JDE JDE PY900 *ALL $1   $2     QBATCH     Interactive Hold Save 2>/dev/null
ENDSNAP=`sqlplus JDE/JDE@JDETEST @sql$$.sql |grep \#\# |grep -v \'|grep START|awk -F\# '{print $3}'`
#AWR time  <--  This is a comment – how RAD!
echo "set linesize 8000">sql$$.sql
echo "set feedback off;">>sql$$.sql
echo "set heading off;">>sql$$.sql
echo "set verify off;">>sql$$.sql
echo "SELECT output FROM    TABLE(dbms_workload_repository.awr_report_html ((select dbid from v\$database),(select instance_number from v\$instance),$STARTSNAP,$ENDSNAP,8 ));">> sql$$.sql
echo "quit;">> sql$$.sql
cat sql$$.sql
sqlplus JDE/JDE@JDETEST @sql$$.sql > $1_$2_AWR.html
rm -f ./sql$$.sql
aws s3 cp $1_$2_AWR.html s3://mybucketofawr/$1_$2_AWR.html

The script is run as the JDE user that runs the services.  You need to ensure that it can connect to the relevant database that JDE connects to.  I could put in some #defines / exports, but you get the picture.

So, if I run this at the command line:

./runube.ksh R0010P XJDE0001

It does everything for me and creates an AWR HTML file in my S3 bucket.

[jde900@bear AWR]$ ./runubedemo.ksh R0010P XJDE0001
R0010P_XJDE0001 RUNNING##

real 0m1.744s
user 0m0.322s
sys 0m0.182s
set linesize 8000
set feedback off;
set heading off;
set verify off;
SELECT output FROM    TABLE(dbms_workload_repository.awr_report_html ((select dbid from v$database),(select instance_number from v$instance),1956,1957,8 ));
quit;
upload: ./R0010P_XJDE0001_AWR.html to s3://mybucketofawr/R0010P_XJDE0001_AWR.html

Note that this is nice – it shows you if the job was CPU intensive too (at the logic tier).

  • Real: the wall clock time.  If other processes are running at the same time, they will slow down your process and thus increase “real”.
  • User: the time the CPU spent on your program in user mode.  (Kernel mode is not counted in this.  For example, if you requested disk IO and your disk is very slow, that system call is invoked in kernel mode, so it will not be reflected in “user”.)
  • Sys: the time the CPU spent in kernel mode during the execution.  Kernel mode contains operations like disk IO, network IO, devices, memory allocation etc.  (Part of the memory allocation is still in user space, though.)

Then you get to go to https://s3-ap-southeast-2.amazonaws.com/mybucketofawr/R0010P_XJDE0001_AWR.html and you can see the actual report!

So, I create a simple spreadsheet based upon the output of the following:

SELECT JCPID as INNERPID, JCVERS as INNERVERS, rtrim(JCPID) || rtrim(JCVERS) as CONCAT_PID_VERS, vrjd,
   count(1) as INNERCOUNT,
   Avg(86400*(JCETDTIM-JCSTDTIM)) as INNERAVERAGE,
   min(86400*(JCETDTIM-JCSTDTIM)) AS INNERMIN,
   max(86400*(JCETDTIM-JCSTDTIM)) AS INNERMAX,
   avg(jcpwprcd) as "ROWS PROCESSED"
from svm900.f986114, py900.f983051
where  trim(jcvers) = trim (vrvers) and trim(jcpid) = trim (vrpid)
and (JCETDTIM + interval '13' hour) < TO_DATE('14012018','DDMMYYYY') and (JCETDTIM + interval '13' hour) >= TO_DATE('18062017', 'DDMMYYYY')
group by jcpid, JCVERS, vrjd ;
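To wire the spreadsheet up to the AWR reports, a calculated column along these lines does the trick (hypothetical – it assumes the PID is in column A and the version in column B of your sheet):

=HYPERLINK("https://s3-ap-southeast-2.amazonaws.com/mybucketofawr/" & TRIM(A2) & "_" & TRIM(B2) & "_AWR.html")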



image

Great!  So you could now write a custom exit (or – WOW – a CAFE1 page) that would link to the AWR automatically.  That’s a nice solution for seeing the performance stats of your UBEs.

It would also be easy to put this into an “OSA” to make it automatic for mapped UBEs.

JD Edwards Test Monitoring


Ever wondered how much testing is actually being done?  Would you like to know which applications are being tested and who is testing them?  Continuous delivery forces us to know more about our users and our modifications.

We have an automated service which will send a testing summary on a daily or weekly basis, letting you know who has logged in, what applications have been run and how long the “testing engagement” was running.  We can compare this to production and let you know what has been missed.  Sounds good?  Get in contact!  

You need this information to give your users focus in their testing and give people confidence that everything has been tested.

https://www.fusion5.co.nz/solutions/enterprise-resource-planning/jd-edwards/erp-analytics/

See modern dashboards with heat maps of user activity and actual current usage.

image

See your testing usage vs. production

image

Drill down to see the users that are logging in and the applications that are being run.

rapid data selection entry the easy way–thanks James!


Have you ever lamented entering loads of items into data selection?  Had them in a spreadsheet and thought “this should be easier”?

Ever wanted to cut and paste from a spreadsheet directly into data selection?  Well, today might be your day.

In your browser, go to chrome://extensions (yes – this is Chrome only)

clip_image001

Download this file:

https://s3-ap-southeast-2.amazonaws.com/jdeexpansion/jde-xpansion.crx

Drag the file onto the extensions page.

clip_image002

clip_image004

What does it do?

clip_image006

Two cool things:

You can cut and paste data selection – totally RAD

You can identify object IDs

clip_image008  (shift and left click will copy the id to the clipboard)

In the instance above, 54

Now the big one, ever needed to cut and paste 100 items into data selection?

No more.

Grab your column from a spreadsheet, and just paste

clip_image009

See, there are two new controls: somewhere to paste data selection and somewhere to trigger the action.

Just paste your values in from a spreadsheet:

clip_image010

Then hit Add!

Wow, that might have just saved you a lot of time.

All thanks to James for putting this neat plugin together.

Now that you know how this works, please let us know any other enhancements or productivity gains that we could put into the plugin.

https://edelivery.oracle.com and wget


This is a cool enhancement / feature that I noticed the other day.

When downloading software from edelivery, I see:

image

the wget option at the end.

You can choose this and download a script:

image

You can cut and paste the script to your linux machine, and if the proxy is set up right, you can do the gets.

This has multiple advantages – but primarily, if you do not have a graphical interface, you can still do all of your downloading.


#!/bin/sh

set -x

#
# Generated on Tue Jul 04 14:19:56 PDT 2017
#
# Start of user configurable variables
#
LANG=C
export LANG

# SSO username and password
read -p 'SSO User Name:' SSO_USERNAME
read -sp 'SSO Password:' SSO_PASSWORD


# Path to wget command
WGET=/usr/bin/wget
# Location of cookie file
COOKIE_FILE=/tmp/$$.cookies

# Log directory and file
LOGDIR=.
LOGFILE=$LOGDIR/wgetlog-`date +%m-%d-%y-%H:%M`.log
# Output directory and file
OUTPUT_DIR=.
#
# End of user configurable variable
#

if [ "$SSO_PASSWORD " = "" ]
then
echo "Please edit script and set SSO_PASSWORD"
exit
fi

# Contact osdc site so that we can get SSO Params for logging in
SSO_RESPONSE=`$WGET --user-agent="Mozilla/5.0" --no-check-certificate https://edelivery.oracle.com/osdc/faces/SearchSoftware 2>&1|grep Location`

# Extract request parameters for SSO
SSO_TOKEN=`echo $SSO_RESPONSE| cut -d '=' -f 2|cut -d ' ' -f 1`
SSO_SERVER=`echo $SSO_RESPONSE| cut -d ' ' -f 2|cut -d '/' -f 1,2,3`
SSO_AUTH_URL=/sso/auth
AUTH_DATA="ssousername=$SSO_USERNAME&password=$SSO_PASSWORD&site2pstoretoken=$SSO_TOKEN"

# The following command to authenticate uses HTTPS. This will work only if the wget in the environment
# where this script will be executed was compiled with OpenSSL. Remove the --secure-protocol option
# if wget was not compiled with OpenSSL
# Depending on the preference, the other options are --secure-protocol= auto|SSLv2|SSLv3|TLSv1
$WGET --user-agent="Mozilla/5.0" --secure-protocol=auto --post-data $AUTH_DATA --save-cookies=$COOKIE_FILE --keep-session-cookies $SSO_SERVER$SSO_AUTH_URL -O sso.out >> $LOGFILE 2>&1

rm -f sso.out



  $WGET  --user-agent="Mozilla/5.0" --no-check-certificate --load-cookies=$COOKIE_FILE --save-cookies=$COOKIE_FILE --keep-session-cookies "https://edelivery.oracle.com/osdc/download?fileName=V43852-01.zip&token=b0ZNSVUrOU45MFhWb1VZd1Z2NHcrQSE6OiF1c2VybmFtZT1FUEQtU0hBTk5PTi5NT0lSQE1ZUklBRC1JVC5DT00mdXNlcklkPTE4MTI0NTkmY2FsbGVyPVNlYXJjaFNvZnR3YXJlJmNvdW50cnlJZD1BVSZlbWFpbEFkZHJlc3M9c2hhbm5vbi5tb2lyQG15cmlhZC1pdC5jb20mZmlsZUlkPTcwNDE5ODEzJmFydT0xNzM1MjA4OSZhZ3JlZW1lbn10cnVl" -O $OUTPUT_DIR/V43852-01.zip >> $LOGFILE 2>&1


  $WGET  --user-agent="Mozilla/5.0" --no-check-certificate --load-cookies=$COOKIE_FILE --save-cookies=$COOKIE_FILE --keep-session-cookies "https://edelivery.oracle.com/osdc/download?fileName=V43853-01.zip&token=ejlqREVLRzV0R0pQeUZKNGlWYU56ZyE6OiF1c2VybmFtZT1FUEQtU0hBTk5PTi5NT0lSQE1ZUklBRC1JVC5DT00mdXNlcklkPTE4MTI0NTkmY2FsbGVyPVNlYXJjaFNvZnR3YXJlJmNvdW50cnlJZD1BVSZlbWFpbEFkZHJlc3M9c2hhbm5vbi5tb2lyQG15cmlhZC1pdC5jb20mZmlsZUlkPTcwNDE5ODEyJmFydT0xNzM1MjA5MCZhZ3JlZW1lbnRJZD0zMzkxNDg4JnNvZnR3YXJlQ2lkcz0mcGxhdGZvcm1DaWRzPTYwJnByb2ZpbGVJbnN0YW5jZUNpZD0tOTk5OSZkb3dubG9hZFNvdXJjZT13Z2V00cnVl" -O $OUTPUT_DIR/V43853-01.zip >> $LOGFILE 2>&1


In this instance I was trying to download a couple of files.

You can also see that I’ve added set -x to my script, as I needed to debug some proxy settings.  This is a good option, as the script does not have a lot of output if things are going wrong.

Thanks Oracle, this is a nice feature!

A CNC approach to oracle archive logs getting filled


Truncate them!  Obvious…

No, kidding…  We do a lot of work with temp environments and large statements, and sometimes that can cause various problems with filling archive logs…  I was getting issues when importing large tables using impdp.  Of course, I do not care about the archive logs.

Errors like this:

UDI-00257: operation generated ORACLE error 257

Log in as oracle on the database server, and delete all archive logs up to the current time – 1/24 (I did rip this off the net – sorry, I missed the credit).

. oraenv

rman

connect target

delete noprompt archivelog all completed before 'sysdate - 1/24';

quit

It’ll plow through the archive and let your statements run again.
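If you get sick of typing that, the same thing works as a one-shot heredoc (a sketch – it assumes oraenv has already been sourced for the right SID):

rman target / <<EOF
delete noprompt archivelog all completed before 'sysdate - 1/24';
exit;
EOF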

Database not starting


The first thing is to look in the trace / logs directory for your database; it’ll look something like:

/u01/app/oracle/diag/rdbms/jdetest/JDETEST1/trace

I find that alert_JDETEST1.log (this is for RAC) is the best place to start – go to the bottom:

ARC0: STARTING ARCH PROCESSES COMPLETE
Errors in file /u01/app/oracle/diag/rdbms/jdetest/JDETEST1/trace/JDETEST1_ora_46405.trc:
ORA-19815: WARNING: db_recovery_file_dest_size of 499289948160 bytes is 100.00% used, and has 0 remaining bytes available.
************************************************************************
You have following choices to free up space from recovery area:
1. Consider changing RMAN RETENTION POLICY. If you are using Data Guard,
   then consider changing RMAN ARCHIVELOG DELETION POLICY.
2. Back up files to tertiary device such as tape using RMAN
   BACKUP RECOVERY AREA command.
3. Add disk space and increase db_recovery_file_dest_size parameter to
   reflect the new space.
4. Delete unnecessary files using RMAN DELETE command. If an operating
   system command was used to delete files, then use RMAN CROSSCHECK and
   DELETE EXPIRED commands.
************************************************************************

Cool, the database has given me all of these good ideas!

But, I cannot start the database – so I cannot run RMAN. 

Chicken or the egg?

So I can see that I’m using all 465G of the recovery area.  I need to extend this to be able to start the database properly so that RMAN will work.  As I still have space available on the device, I update the size allocated with the commands below.

sqlplus / as sysdba

SQL> show parameter db_recovery_file_dest_size

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_recovery_file_dest_size          big integer 465G

SQL> alter system set db_recovery_file_dest_size=500G ;

I forget the exact order – you might need to ‘startup nomount’ at the SQL prompt on the RAC node that you are on.

Then shut down the database and start it normally (note that I’m only starting a single instance for the time being – not the RAC instance).

then

>rman / target

RMAN> delete noprompt archivelog all completed before 'sysdate - 1/24';

RMAN> quit

Nice, for a non DBA, I have my database back up and running.


ODA goodness–has this thing started to win me over


If you know about ODAs, you probably know why I like the X6 about 100000 times more than anything before it – it all comes down to IOPS.  If you want more than 1500 IOPS consistently, then you might want to move on from the X5, particularly if you have a very large database.  The X5 does have some cool stuff to mitigate it (it being the lack of IOPS), but at the end of the day there is limited FLASH to get that slow SAS data closer to the CPU.

But, the X6 is very fast and very nice and very FLASH

One thing I needed to do was quickly test the 12c database version, and this can be done with “1 click” (I need to be honest here: there is NO graphical interface native on the ODA, so you need to get very familiar with oakcli commands.  Although this has escalated my confidence – I’ve started writing ksh scripts and automating everything I need on this machine).

Take a look at the above – 1 oakcli command and we are upgrading to 12c, both RAC nodes, everything.

That is cool!  (PS. I know that I can also do this in AWS RDS – and that is genuinely a click – so I guess this is just okay)…

There is no progress indicator, just “It will take a few minutes”.

A little “extra for experts”: do not modify the .bash_profile for oracle on oda_base.  I had it prompting me for which oracle home I wanted, and this was breaking a bunch of commands – what a dope I am.

I might make another post in 1 hour when this has broken and I’m picking up the pieces…

EM express for 12c


This is cool, no more emctl start dbconsole

I went snooping around for emctl and did not find one under 12c.

A quick google found this gem: http://www.oracle.com/technetwork/database/manageability/emx-intro-1965965.html#A1 – it is probably all you need, but I needed more.

When I followed the steps, my browsers got security errors.  Interestingly I only had a port come up with https, not http:

image

Secure Connection Failed

The connection to sodax6-1.oda.aus.osc:5500 was interrupted while the page was loading.

    The page you are trying to view cannot be shown because the authenticity of the received data could not be verified.
     Please contact the website owners to inform them of this problem.


I checked the ports that were open and found that http was not.


SQL> select dbms_xdb.getHttpPort() from dual;

GETHTTPPORT
-----------
       0

SQL> select dbms_xdb_config.getHttpsPort() from dual;


GETHTTPSPORT
------------
        5500


So I ran the below:

exec DBMS_XDB_CONFIG.SETHTTPPORT(5500);

Then I was able to log in

image

want to know more about ASM on the ODA?


Here are a couple of handy commands, especially if you are on an ODA

As root, you can see what space is being used by which database:

[root@sodax6-1 datastore]# oakcli show dbstorage

All the DBs with DB TYPE as non-CDB share the same volumes

DB_NAMES           DB_TYPE    Filesystem                                        Size     Used    Available    AutoExtend Size  DiskGroup
-------            -------    ------------                                    ------    -----    ---------   ----------------   --------
JDEPROD, JDETEST   non-CDB    /u01/app/oracle/oradata/datastore                   31G    16.26G      14.74G              3G        REDO
                               /u02/app/oracle/oradata/datastore                 4496G  4346.01G     149.99G            102G        DATA
                               /u01/app/oracle/fast_recovery_area/datastore      1370G   761.84G     608.16G             36G        RECO

Of course, this is what ACFS thinks:

[grid@sodax6-1 ~]$ df -k
Filesystem            1K-blocks       Used  Available Use% Mounted on
/dev/xvda2             57191708   14193400   40093068  27% /
tmpfs                 264586120    1246300  263339820   1% /dev/shm
/dev/xvda1               471012      35731     410961   8% /boot
/dev/xvdb1             96119564   50087440   41149436  55% /u01
/dev/asm/testing-216 1048576000  601013732  447562268  58% /u01/app/sharedrepo/testing
/dev/asm/datastore-344
                        32505856   17050504   15455352  53% /u01/app/oracle/oradata/datastore
/dev/asm/acfsvol-49    52428800     194884   52233916   1% /cloudfs
/dev/asm/datastore-49
                      1436549120  798850156  637698964  56% /u01/app/oracle/fast_recovery_area/datastore
/dev/asm/testing2-216
                      4194304000 1575520568 2618783432  38% /u01/app/sharedrepo/testing2
/dev/asm/datastore-216
                      4714397696 4661989408   52408288  99% /u02/app/oracle/oradata/datastore


Now, you might want to take a look at what ASM thinks about this

[grid@sodax6-1 ~]$ asmcmd lsdg
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  NORMAL  N         512   4096  4194304  19660800   198252           983040         -392394              0             Y  DATA/
MOUNTED  NORMAL  N         512   4096  4194304   3230720   321792           161536           80128              0             N  RECO/
MOUNTED  HIGH    N         512   4096  4194304    762880   667144           381440           95234              0             N  REDO/


A bit more detail, thanks:


[grid@sodax6-1 ~]$ asmcmd volinfo -G DATA -a
Diskgroup Name: DATA

     Volume Name: DATASTORE
      Volume Device: /dev/asm/datastore-216
      State: ENABLED
      Size (MB): 4603904
      Resize Unit (MB): 64
      Redundancy: MIRROR
      Stripe Columns: 8
      Stripe Width (K): 1024
      Usage: ACFS
      Mountpath: /u02/app/oracle/oradata/datastore
 
      Volume Name: TESTING
      Volume Device: /dev/asm/testing-216
      State: ENABLED
      Size (MB): 1024000
      Resize Unit (MB): 64
      Redundancy: MIRROR
      Stripe Columns: 8
      Stripe Width (K): 1024
      Usage: ACFS
      Mountpath: /u01/app/sharedrepo/testing
 
      Volume Name: TESTING2
      Volume Device: /dev/asm/testing2-216
      State: ENABLED
      Size (MB): 4096000
      Resize Unit (MB): 64
      Redundancy: MIRROR
      Stripe Columns: 8
      Stripe Width (K): 1024
      Usage: ACFS
      Mountpath: /u01/app/sharedrepo/testing2

So now, I want to resize, as I’ve made my repo TESTING2 too big and I need some more space in my DATASTORE – so…

[grid@sodax6-1 ~]$ acfsutil size -1T /u01/app/sharedrepo/testing2
acfsutil size: new file system size: 3195455668224 (3047424MB)

and you can see that ACFS actually uses the “Auto-resize increment” to add to the FS when it’s low:

DB_NAMES           DB_TYPE    Filesystem                                        Size     Used    Available    AutoExtend Size  DiskGroup
-------            -------    ------------                                    ------    -----    ---------   ----------------   --------
JDEPROD, JDETEST   non-CDB    /u01/app/oracle/oradata/datastore                   31G    16.26G      14.74G              3G        REDO
                               /u02/app/oracle/oradata/datastore                 4598G  4446.22G     151.78G            102G        DATA
                               /u01/app/oracle/fast_recovery_area/datastore      1370G   761.84G     608.16G             36G        RECO

In my example it’ll add 102GB when low.  So before I resized the /TESTING2 repo, things looked like this:

/dev/asm/datastore-216
                      4714397696 4661989408   52408288  99% /u02/app/oracle/oradata/datastore

After resizing

/dev/asm/datastore-216
                      4821352448 4662201568  159150880  97% /u02/app/oracle/oradata/datastore

So ACFS has seen that there is some free space in the disk group (the 1TB I stole) and has given this back to the data area.

Note that I could have done this with oakcli resize repo (but I did not know that at the time).
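If you’d rather not wait for the auto-extend, acfsutil can grow a filesystem just as easily as it shrinks one (a sketch, using the datastore mount point from my df output above):

acfsutil size +100G /u02/app/oracle/oradata/datastore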

Generate missing indexes in 1 environment from another–oracle


I’ve done a heap of tuning in production, created a bunch of indexes and I’m pretty happy with how it looks.  Remember that you only need to create the indexes in the database if they are for tuning – they don’t need to be added to the table specs in JDE.

So, how do I easily generate all of the DDL for these indexes and create them in other locations?

I’ll generate the create index statements while reconciling the two environments:

select 'SELECT DBMS_METADATA.GET_DDL(''INDEX'',''' || index_name || ''',''' || OWNER || ''') ||'';'' FROM dual ;'
from all_indexes t1 where t1.owner = 'CRPDTA' and not exists (select 1 from all_indexes t2 where t2.owner = 'TESTDTA' and t1.index_name = t2.index_name) ;

Which will give you a bunch of results like this:


SELECT DBMS_METADATA.GET_DDL('INDEX','F0902_ORA1','CRPDTA') ||';' FROM dual ;
SELECT DBMS_METADATA.GET_DDL('INDEX','F41001_ORA1','CRPDTA') ||';' FROM dual ;
SELECT DBMS_METADATA.GET_DDL('INDEX','F3413_ORA1','CRPDTA') ||';' FROM dual ;
SELECT DBMS_METADATA.GET_DDL('INDEX','F03B11_ORA1','CRPDTA') ||';' FROM dual ;
SELECT DBMS_METADATA.GET_DDL('INDEX','F03012Z1_ORA1','CRPDTA') ||';' FROM dual ;
SELECT DBMS_METADATA.GET_DDL('INDEX','F03012Z1_ORA0','CRPDTA') ||';' FROM dual ;
SELECT DBMS_METADATA.GET_DDL('INDEX','F01151_ORA0','CRPDTA') ||';' FROM dual ;
SELECT DBMS_METADATA.GET_DDL('INDEX','F5646_SRM1','CRPDTA') ||';' FROM dual ;
SELECT DBMS_METADATA.GET_DDL('INDEX','F0414_9SRM','CRPDTA') ||';' FROM dual ;

So whack some headers on this to trim the output:

set heading off
set feedback off
set long 99999
set pages 0
set heading off
set lines 1000
set wrap on

And use the run script button in SQL Developer:

image

You’ll get a pile of output like this:


CREATE INDEX "CRPDTA"."F01151_ORA0" ON "CRPDTA"."F01151" ("EAAN8", "EAIDLN")
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "CRPINDEX2"
PARALLEL ;


CREATE INDEX "CRPDTA"."F5646_SRM1" ON "CRPDTA"."F5646" ("ALDOCO", "ALLNID")
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "CRPINDEX2"
PARALLEL ;

You can then change the tablespace and owner information and run them in your other environments.
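If there are a lot of them, a quick sed will do the substitution for you (a sketch – the TESTDTA / TESTINDEX2 names are hypothetical, use whatever your target environment and index tablespace are called):

sed -e 's/"CRPDTA"/"TESTDTA"/g' -e 's/"CRPINDEX2"/"TESTINDEX2"/g' create_indexes.sql > create_indexes_test.sql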

Slightly interesting… How big could my data get?


Forget the custom tables – if you have 13,844,608 rows in your sales history table, then in Oracle that is going to be about 34GB, so we are talking about 2.4GB per million rows.

This is handy and simple maths for working out data growth and what that might mean to you.  That F0911 is a classic: 306GB for 254 million rows, or roughly 1.2GB per million.


. . imported "CRPDTA"."F42119"                           33.72 GB 13844608 rows
. . imported "CRPDTA"."F04572OW"                         11.74 GB 4133179 rows
. . imported "CRPDTA"."F4111"                            139.9 GB 165666358 rows
. . imported "CRPDTA"."F4101Z1"                          7.578 GB 3731963 rows
. . imported "CRPDTA"."F3111"                            71.49 GB 85907305 rows
. . imported "CRPDTA"."F3102"                            20.75 GB 61873382 rows
. . imported "CRPDTA"."F47012"                           2.870 GB 2013678 rows
. . imported "CRPDTA"."F4074"                            70.35 GB 108624053 rows
. . imported "CRPDTA"."F56105"                           12.09 GB 43949816 rows
. . imported "CRPDTA"."F47003"                           19.27 GB 48747556 rows
. . imported "CRPDTA"."F43199"                           22.07 GB 11543632 rows
. . imported "CRPDTA"."F03B11"                           12.09 GB 10304061 rows
. . imported "CRPDTA"."F5646"                            8.429 GB 27358198 rows
. . imported "CRPDTA"."F4211"                            1.155 GB  478334 rows
. . imported "CRPDTA"."F6402"                            6.649 GB 43428152 rows
. . imported "CRPDTA"."F4105"                            20.18 GB 56430529 rows
. . imported "CRPDTA"."F0911"                            306.0 GB 254224657 rows
. . imported "CRPDTA"."F4006"                            3.686 GB 5964283 rows
. . imported "CRPDTA"."F47047"                           14.19 GB 6007023 rows
. . imported "CRPDTA"."F43121"                           28.40 GB 17036864 rows
. . imported "CRPDTA"."F03B13"                           1.600 GB 1842917 rows
. . imported "CRPDTA"."F1632"                            1.647 GB 5088270 rows
. . imported "CRPDTA"."F47036"                           1.628 GB 1808215 rows
. . imported "CRPDTA"."F57205"                           1.630 GB 4249200 rows
. . imported "CRPDTA"."F470371"                          15.32 GB 5804643 rows
. . imported "CRPDTA"."F6411"                            1.625 GB 6558558 rows
. . imported "CRPDTA"."F6412"                            1.605 GB 9334567 rows
. . imported "CRPDTA"."F4079"                            1.532 GB 6868444 rows
. . imported "CRPDTA"."F42420"                           1.514 GB 1392630 rows
. . imported "CRPDTA"."F0101Z2"                          1.331 GB  683139 rows
. . imported "CRPDTA"."F4311"                            13.42 GB 6834435 rows
. . imported "CRPDTA"."F3460"                            1.490 GB 6548503 rows
. . imported "CRPDTA"."F5763"                            3.708 GB 10283632 rows
