Thursday 15 December 2016

Installing AWS Inspector Agent on EC2 Hosts




1. wget https://d1wk0tztpsntt1.cloudfront.net/linux/latest/install

2. sudo bash install

Installed:
  AwsAgent.x86_64 0:1.0.578.0-100578

Complete!
HTTP/1.1 200 OK

3. To stop the agent:
sudo /etc/init.d/awsagent stop

4. To start the agent:
sudo /etc/init.d/awsagent start

The configuration file is /opt/aws/awsagent/etc/agent.cfg

5. To check the agent status:
sudo /opt/aws/awsagent/bin/awsagent status
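
If the agent has to be rolled out to several hosts, steps 1 and 2 can be wrapped in a small idempotent script. A minimal sketch using only the commands above (the /tmp working directory is an assumption):

#!/bin/bash
# Install the AWS Inspector agent only if it is not already installed and running
if ! sudo /opt/aws/awsagent/bin/awsagent status >/dev/null 2>&1; then
    cd /tmp
    rm -f install*    # avoid reusing a stale install file (see the known issue below)
    wget https://d1wk0tztpsntt1.cloudfront.net/linux/latest/install
    sudo bash install
fi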



Known issues:

[ec2-user@*********~]$ sudo bash install
install: line 9: class: command not found
install: line 11: syntax error near unexpected token `('
install: line 11: ` undef_method m unless m =~ /(^__|^send$|^object_id$)/'


Solution: Move aside any existing install* files in the download directory, then download a fresh copy using wget as shown in step 1 above and re-run the install.


Tuesday 15 November 2016

EC2 On-Demand Instances Start/Stop Automation:


1. Create an IAM user and grant it full access to the EC2 instances.

2. Download the access key ID and secret access key for the IAM user.

3. Create shell scripts to perform the start and stop:

Start:

#!/bin/sh
ec2-start-instances i-<instance-id-1> i-<instance-id-2> -O <access-key-id> -W <secret-key>

Stop:

#!/bin/sh
ec2-stop-instances i-<instance-id-1> i-<instance-id-2> -O <access-key-id> -W <secret-key>

In the above scripts, you can add more instance IDs as needed. The access key ID and secret key belong to the IAM user performing the start/stop.

4. Create cron jobs to stop the instances on Friday evening and start them again on Monday morning, so you save money on EC2 instances that are not used over the weekend (see the crontab sketch after this list).


5. The cron jobs should run on an EC2 instance in the same subnet as the instances being stopped and started.
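
A minimal crontab sketch for the schedule in step 4 (the script paths, times, and log location are assumptions; adjust for your environment and the instance's timezone):

# m  h  dom mon dow  command
0 20  *   *   5   /home/ec2-user/scripts/stop_instances.sh  >> /var/log/ec2-schedule.log 2>&1   # Friday 8 PM: stop
0 06  *   *   1   /home/ec2-user/scripts/start_instances.sh >> /var/log/ec2-schedule.log 2>&1   # Monday 6 AM: start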

NAGIOS Monitoring solution for JBoss 7.1



1. Performed the following on the server where Nagios is installed:

Defined the host, the JBoss host IP, and the contact group in host.cfg under /usr/local/nagios/etc/objects:


define host{
        use                     linux-server            ; Name of host template to use
                                                        ; This host definition will inherit all variables that are defined
                                                        ; in (or inherited by) the linux-server host template definition.
        host_name               Prod***-APP1
        alias                   Prod***-APP1
        contact_groups          jboss-admin
        notifications_enabled   1
        address                 10.1.*.***
        }
define host{
        use                     linux-server            ; Name of host template to use
                                                        ; This host definition will inherit all variables that are defined
                                                        ; in (or inherited by) the linux-server host template definition.
        host_name               Prod***-APP2
        alias                   Prod***-APP2
        contact_groups          jboss-admin
        notifications_enabled   1
        address                 10.1.*.***
        }

2. In services.cfg, defined the services to be monitored, such as ping, port 8080 (HTTP), and load:

# Define a service to check HTTP on the local machine.
# Disable notifications for this service by default, as not all users may have HTTP enabled.

define service{
        use                             local-service         ; Name of service template to use
        host_name                       App1
        service_description             HTTP
        check_command                   check_http!-p 8080
        check_interval                  1
        notifications_enabled           0
        max_check_attempts              4
        event_handler                   check_stopjb
        }

3. Since production had multiple servers, I defined a separate services .cfg file for each host to be monitored. For example, below I have defined two services files for two JBoss hosts:

[ec2-user@prodnagios-server prod_services]$ ls -ltr
total 88
-rw-r--r-- 1 nagios nagios 4363 Jun 23  2015 services.cfg
-rw-r--r-- 1 nagios nagios 5130 Jul 21  2015 prodAP1_Services.cfg
-rw-r--r-- 1 nagios nagios 4832 Jul 21  2015 prodAP2_Services.cfg
[ec2-user@prodnagios-server prod_services]$ pwd
/usr/local/nagios/etc/objects/prod_services

Each file contains entries for the services to be monitored, for example:

define service{
        use                             local-service         ; Name of service template to use
        host_name                       Prod***-APP1
        service_description             HTTP
        check_command                   check_http!-p 8080
        check_interval                  1
        notifications_enabled           1
         notification_options            w,u,c,r
        contact_groups                  jboss-admin
        max_check_attempts               4
#        event_handler                   check_stopjb
        }

4. Referenced all of the above files in the main Nagios configuration file, nagios.cfg:

# prod services
cfg_file=/usr/local/nagios/etc/objects/prod_services/prodAP1_Services.cfg
cfg_file=/usr/local/nagios/etc/objects/prod_services/prodAP2_Services.cfg

# Definitions for monitoring the local (Linux) host
cfg_file=/usr/local/nagios/etc/objects/host.cfg

5. Made sure ports 8080, 22, and all other required ports on the JBoss hosts are reachable from the Nagios server by placing it in the appropriate AWS security group, and that the Nagios NRPE agent is installed on all JBoss servers.
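
Before reloading Nagios, NRPE connectivity from the Nagios server to each JBoss host can be tested from the command line. A quick sketch, assuming the default plugin location and a check_load command already defined in the remote host's nrpe.cfg:

/usr/local/nagios/libexec/check_nrpe -H 10.1.x.x                 # should print the NRPE version
/usr/local/nagios/libexec/check_nrpe -H 10.1.x.x -c check_load   # should return the load figures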

Then ran the configuration verification to make sure everything was defined correctly:



[ec2-user@prodnagios-server etc]$ sudo /usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg

Nagios Core 4.0.8
Copyright (c) 2009-present Nagios Core Development Team and Community Contributors
Copyright (c) 1999-2009 Ethan Galstad
Last Modified: 08-12-2014
License: GPL

Website: http://www.nagios.org
Reading configuration data...
   Read main config file okay...
   Read object config files okay...

Running pre-flight check on configuration data...

Checking objects...
        Checked 77 services.
        Checked 12 hosts.
        Checked 1 host groups.
        Checked 0 service groups.
        Checked 6 contacts.
        Checked 1 contact groups.
        Checked 35 commands.
        Checked 5 time periods.
        Checked 0 host escalations.
        Checked 0 service escalations.
Checking for circular paths...
        Checked 12 hosts
        Checked 0 service dependencies
        Checked 0 host dependencies
        Checked 5 timeperiods
Checking global event handlers...
Checking obsessive compulsive processor commands...
Checking misc settings...

Total Warnings: 0
Total Errors:   0

Things look okay - No serious problems were detected during the pre-flight check

6. The JBoss hosts could then be monitored through the Nagios web console.




7. Also defined my email address and mobile (SMS gateway address) in the contacts.cfg file to receive alerts.

The contacts.cfg file must be referenced in nagios.cfg as well:

# host groups, contacts, contact groups, services, etc.
cfg_file=/usr/local/nagios/etc/objects/contacts.cfg


define contact{
        contact_name                    SandeshSMS          ; Short name of user
        use                             generic-contact         ; Inherit default values from generic-contact template (defined above)
        alias                           Nagios Admin            ; Full name of user

        email                          **********@txt.att.net       ; <<***** CHANGE THIS TO YOUR EMAIL ADDRESS ******
       service_notifications_enabled    1
        }


define contact{
        contact_name                    Sandesh         ; Short name of user
        use                             generic-contact         ; Inherit default values from generic-contact template (defined above)
        alias                           Nagios Admin            ; Full name of user

        email                           Sandesh.achar@********.com       ; <<***** CHANGE THIS TO YOUR EMAIL ADDRESS ******
       service_notifications_enabled    1
        }



Thursday 27 October 2016

MySQL Database Import fails in AWS RDS: ERROR 1449 (HY000) at line 27101: The user specified as a definer ('dbadmin'@'%') does not exist


Error message :

ERROR 1449 (HY000) at line 27101: The user specified as a definer ('dbadmin'@'%') does not exist


Syntax :

[ec2-user@ip-*******~]$ mysql -u***** -p***** PROD_DATABASE < PROD_DATABASE.SQL


Solution:

AWS RDS does not keep the 'localhost' host entry from the backed-up MySQL dump; 'localhost' is replaced with '%'. The dump file therefore needs to be updated so that every DEFINER entry of the form username@'%' uses 'localhost' instead.


Updating the dump file to use 'localhost' resolves the error.
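
One way to rewrite the definer entries in place before re-running the import is a simple sed pass. A minimal sketch, assuming the dump uses the usual backquoted form DEFINER=`dbadmin`@`%` (check the exact string in your file and keep a copy first):

cp PROD_DATABASE.SQL PROD_DATABASE.SQL.orig
sed -i 's/DEFINER=`dbadmin`@`%`/DEFINER=`dbadmin`@`localhost`/g' PROD_DATABASE.SQL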





Tuesday 6 September 2016

ORA-04030: out of process memory when trying to allocate 4088 bytes while ADOP in 12.2.3



Solution:

- Restarting the adworker fixed the issue.
- Recommendations if the problem persists:

Try one of the following workarounds :

 Increase the maximum number of memory map areas (vm.max_map_count) to 200000 at the OS level:
 $ more /proc/sys/vm/max_map_count
 $ sysctl -w vm.max_map_count=200000

 -or-

 Adjust the realfree heap pagesize within the database by
 setting the following parameters in the init/spfile and restart the database.
 _use_realfree_heap=TRUE
 _realfree_heap_pagesize_hint = 262144

Then try restarting the worker
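
If the max_map_count workaround is used, note that sysctl -w only changes the running kernel. To make the value persist across reboots (an addition to the note above, not part of the original recommendation), append it to /etc/sysctl.conf:

echo "vm.max_map_count=200000" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p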

Monday 25 July 2016

Very High CPU usage, however no database/application processing on Linux platforms.


Check the CPU usage of the khugepaged kernel process; it may be consuming close to 100% of a CPU.

Cause:

Many recent Linux distributions ship with Transparent Hugepages enabled by default. When Linux uses Transparent Hugepages, the kernel tries to allocate memory in large chunks (usually 2MB) rather than 4K. This can improve performance by reducing the number of pages the CPU must track. However, some applications still allocate memory based on 4K pages, and this can cause noticeable performance problems when Linux tries to defrag 2MB pages.

Solutions:

If you are running a Cassandra database on this Linux server:

1. A temporary fix: drop caches by entering:

sync && echo 3 > /proc/sys/vm/drop_caches

2. A more lasting solution: disable defrag for transparent hugepages by entering:

echo never | sudo tee /sys/kernel/mm/transparent_hugepage/defrag

Another alternative: add -XX:+AlwaysPreTouch to the jvm.options file. This change should be tested carefully before being put into production.
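
To confirm the current Transparent Hugepage settings and keep the "never" setting across reboots, a minimal sketch (re-applying via /etc/rc.local is one option and an assumption here; a tuned profile or the transparent_hugepage=never kernel boot parameter also works):

cat /sys/kernel/mm/transparent_hugepage/enabled
cat /sys/kernel/mm/transparent_hugepage/defrag

# lines to add to /etc/rc.local so the setting is re-applied at boot:
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag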

Wednesday 15 June 2016

Oracle EBS 12.1.3 to 12.2.4 Upgrade Issues

Completed a successful upgrade iteration on one of the environments: 12.1.3 to 12.2.4.

Below are a few of the issues faced and their resolutions:

The install was done on a new application server, and the upgrade was then performed.


1. While executing rapidwiz for the online application (12.2.0) installation, an error related to the bundled JRE appeared:

libXtst.so.6: cannot open shared object file: No such file or directory


Solution:


Installed OS Package:  xorg-x11-libs-compat-6.8.2-1.EL.33.0.1.i386
AND

# unlink /usr/lib/libXtst.so.6
# ln -s /usr/X11R6/lib/libXtst.so.6.1 /usr/lib/libXtst.so.6



2. Error during adop:

Checking for existing adop sessions.
    [UNEXPECTED]Duplicate rows found for host appsmines01 in FND_OAM_CONTEXT_FILES table
*******FATAL ERROR*******
PROGRAM : (/u01/app/***/***/fs1/EBSapps/appl/ad/12.0.0/bin/adzdoptl.pl)
TIME    : Sat Jun 11 09:36:47 2016
FUNCTION: ADOP::CommonUtil::getAppltopId [ Level 1 ]
ERRORMSG: Duplicate rows found for host ausuldppimebs01 in FND_OAM_CONTEXT_FILES table


Solution:

Found duplicate entries for the apps context file in FND_OAM_CONTEXT_FILES.
Repopulated the XML entries by performing an FND node cleanup and then running AutoConfig on both the database and application tiers.



3. The 12.2.4 upgrade patch 17919161 failed with the below error:


ADOP session:

*******FATAL ERROR*******
PROGRAM : (/u01/app/***/fs1/EBSapps/appl/ad/12.0.0/bin/adzdoptl.pl)
TIME    : Sat Jun 11 20:58:29 2016
FUNCTION: ADOP::GlobalVars::_validateApplyRestartArgs [ Level 1 ]
ERRORMSG: When running adop after a previous patching cycle failed, you must specify either the 'abandon' or 'restart' parameter .


[STATEMENT] Please run adopscanlog utility, using the command

"adopscanlog -latest=yes"

to get the list of the log files along with snippet of the error message corresponding to each log file.


adop exiting with status = 255 (Fail)


LOGFILE:

The following Oracle Reports objects did not generate successfully:
au      plsql   INVISMMX.pll
An error occurred while generating Oracle Reports library files.
***
Continue as if it were successful :
***


Solution:

Installed OS package openmotif21-2.1.30-11.EL7.i686.rpm and restarted adop (restart=yes).



4. On the database server, cr9idata.pl fails:


$ perl cr9idata.pl
Can't locate File/CheckTree.pm in @INC (@INC contains: /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 . /u01/app/oracle/product/11.2.0.4/db_5/appsutil/perl /u01/app/oracle/product/11.2.0.4/db_5/appsutil/perl /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .) at cr9idata.pl line 40.
BEGIN failed--compilation aborted at cr9idata.pl line 40.


Solution:


export PERL5LIB=$ORACLE_HOME/perl/lib/5.10.0:$ORACLE_HOME/perl/site_perl/5.10.0:$ORACLE_HOME/appsutil/perl
export PATH=$ORACLE_HOME/perl:$ORACLE_HOME/perl/lib:$ORACLE_HOME/perl/bin:$PATH




5. SYSTEM tablespace low on space; patch hangs:

TABLESPACE_NAME                PERCENTAGE_USED SPACE_ALLOCATED SPACE_USED SPACE_FREE  DATAFILES
------------------------------ --------------- --------------- ---------- ---------- ----------
SYSTEM                                   99.31           12326   12240.91      85.09         13


Solution:


SQL> alter tablespace system add datafile '+DATA_1' size 25G;

Tablespace altered.


6. After the upgrade, OPMN/Apache does not start, with the below error:

/u01/app/***/fs1/inst/apps/*****/admin/scripts/adapcctl.sh: line 161: /u01/app/****/fs1/FMW_Home/webtier/instances/EBS_web_****_OHS1/bin/opmnctl: No such file or directory


Solution:

After R12.2.4 Upgrade Found OPMNCTL Missing While Starting Services (Doc ID 1953456.1)


Monday 6 June 2016

Oracle EBS 12.2.0 Rapid Install wizard Fails with error : libXtst.so.6: cannot open shared object file: No such file or directory


While executing rapidwiz for the online EBS 12.2.0 installation, an error related to the bundled JRE appeared:

libXtst.so.6: cannot open shared object file: No such file or directory

1. The libXtst.so.6 library is accessed by rapidwiz from /u01/app/oracle/patch/R122v50/Stage/startCD/Disk1/rapidwiz/jre/Linux_x64/1.6.0/lib/i386/xawt/.

RapidWiz is invoked from /u01/app/oracle/patch/R122v50/Stage/startCD/Disk1/rapidwiz.

2. TMPDIR and _JAVA_OPTIONS were already set before executing runit.sh; I also set the values manually and re-ran, but with no luck.

3. Created a soft link:

$ pwd
/u01/app/oracle/patch/R122v50/Stage/startCD/Disk1/rapidwiz/jre/Linux_x64/1.6.0/lib/i386/xawt
$ ls -ltr
total 352
-rwxrwxrwx 1 applmgr oinstall 353146 May 26  2015 libmawt.so
lrwxrwxrwx 1 applmgr dba          23 Jun  6 11:05 libXtst.so.6 -> /usr/lib64/libXtst.so.6

However, after this rapidwiz gave the error "libXtst.so.6: wrong ELF class" (the 64-bit library was linked where the 32-bit JRE needed a 32-bit one). This did not resolve the issue.


Resolution:

When compared against a server where the installation had already completed successfully, the below OS package was found to be missing:

$  rpm -qa --qf='%{NAME}-%{VERSION}-%{RELEASE}.%{ARCH}\n' xorg-x11-libs*
xorg-x11-libs-compat-6.8.2-1.EL.33.0.1.i386
$

The package did not exist on the server where we had the issue:

$  rpm -qa --qf='%{NAME}-%{VERSION}-%{RELEASE}.%{ARCH}\n' xorg-x11-libs*
$


Installed the below packages:

xorg-x11-libs-compat-6.8.2-1.EL.33.0.1.i386.rpm
binutils-2.17.50.0.6-6.0.1.el5.i386.rpm

Performed the below:

# unlink /usr/lib/libXtst.so.6
# ln -s /usr/X11R6/lib/libXtst.so.6.1 /usr/lib/libXtst.so.6
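
To confirm the fix before re-running rapidwiz, a quick hedged check that the link now points at a 32-bit library and the compat package is in place:

file -L /usr/lib/libXtst.so.6     # should report an ELF 32-bit shared object
rpm -q xorg-x11-libs-compat       # confirms the compat package is installed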


Resolved !!!!

Tuesday 24 May 2016

Actualize the old database editions after online patching to avoid possible performance issues: EBS 12.2.3/12.2.4


During online patching (adop), an additional column, ZD_EDITION_NAME, is populated in the seed tables. This happens during the prepare phase.
Online patching does not modify runtime seed data; it uses editioned data storage instead, creating a patch copy of the seed data that is stored in the same table.




Every time online patching is performed, an old database edition is left behind, and these accumulate with each subsequent patching cycle.
If the number of old editions grows too large, system performance starts to be affected. Oracle suggests that when the count reaches 25 or more, you should drop all old database editions by running the adop actualize_all phase followed by a full cleanup. In practice it is less time-consuming to do this periodically rather than waiting for the count to build up.
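
To see how many database editions currently exist, a quick sketch against the standard DBA_EDITIONS dictionary view (run as a DBA user; adop's own reporting may count old editions slightly differently):

$ echo "select count(*) from dba_editions;" | sqlplus -s / as sysdba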


Perform this whenever there is no immediate need for online patching:


Before starting, you should ensure that the system has the recommended database patches and latest AD-TXK code level installed.

To proceed, run the following commands in the order shown:

$ adop phase=prepare
$ adop phase=actualize_all
$ adop phase=finalize finalize_mode=full
$ adop phase=cutover
$ adop phase=cleanup cleanup_mode=full
You have now completed removal of the old database editions.



OR

Every time online patching is performed:

adop phase=actualize_all can be run just before phase=finalize/cutover.

Tuesday 17 May 2016

Adpreclone Fails in EBS 12.2.3 : There is already an ACTIVE ADOP CYCLE with session id


Issue:

The below failure occurred while executing adpreclone.pl:

Running perl /AB01/app/PROD/fs2/prodapps/appl/ad/12.0.0/patch/115/bin/adProvisionEBS.pl ebs-get-serverstatus -contextfile=/AB01/app/PROD/inst/fs2/inst/apps/PROD_AB01PRODapp01/appl/admin/PROD_AB01PRODapp01.xml -servername=AdminServer -promptmsg=hide

The Oracle WebLogic Administration Server is up.

There is already an ACTIVE ADOP CYCLE with session id : SP2-0640: Not connected 
adpreclone cannot be run with pending ADOP session.


Details:

Looking at the error message, it seems there is an open ADOP session. In Oracle EBS R12.2, all ADOP sessions must be completed or aborted before any cloning can be executed. If a clone is taken while an ADOP session is active, both the RUN and PATCH file systems can be left in an inconsistent state, so all ADOP sessions need to be cleared before running adpreclone.


However, when the status was checked using adop -status -detail, there were no active sessions.
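
The session table can also be checked directly in the database. A hedged sketch against AD_ADOP_SESSIONS (replace the placeholder password; the meaning of the status codes varies by AD code level):

$ sqlplus -s apps/<apps_password> <<'EOF'
select adop_session_id, status from ad_adop_sessions order by adop_session_id;
EOF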


Solution:

Patch 22271970: ADPRECLONE.PL FAILS ON DELTA 7

adop phase=apply patches=22271970

Thursday 12 May 2016

Purge inactive sessions running for more than 24 hours: 3266951 records in ICX_SESSIONS


EBS: 12.2.3
RAC Database: 11.2.0.4

Solution :

1. Back up the tables (a backup sketch follows this list).
2. Stop the Apache server. This is required; otherwise end users will receive session-expired/lost messages.

3. Execute the below DELETE statement to purge the ICX_SESSIONS table:
delete FROM ICX_SESSIONS
WHERE (nvl(disabled_flag,'N') = 'Y')
OR (nvl(disabled_flag,'N') = 'N'
AND (last_connect + 1 + (fnd_profile.value_specific( 'ICX_LIMIT_TIME', user_id, responsibility_id, responsibility_application_id, org_id))/24)< sysdate);

4. Run the Purge concurrent program.
E-Business suite R12: $FND_TOP/sql/FNDDLTMP.sql

5. Restore any required data from the backups taken in step 1.
6. Restart the Apache server.
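
A minimal sketch of the backup in step 1, done as copy tables inside the database (the _bak names and the apps connection are assumptions; a Data Pump export works equally well):

$ sqlplus -s apps/<apps_password> <<'EOF'
create table icx_sessions_bak as select * from icx_sessions;
create table icx_session_attributes_bak as select * from icx_session_attributes;
EOF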


Details:

1. We had session rows dating back to 2015, and a large volume of data eligible for deletion based on the ICX_LIMIT_TIME profile option:

SELECT *
FROM ICX_SESSIONS
WHERE (nvl(disabled_flag,'N') = 'Y')
OR (nvl(disabled_flag,'N') = 'N'
AND (last_connect + 1 + (fnd_profile.value_specific( 'ICX_LIMIT_TIME', user_id, responsibility_id, responsibility_application_id, org_id))/24)< sysdate);


This returned roughly 3 million (3,266,951) records.

2. The script FNDDLTMP.sql appeared to stay stuck in its first loop and never reached the DELETE statements. When the DELETE statements from the script were tried manually, no rows were affected.



The loop in FNDDLTMP.sql that never completed:

CURSOR c_abandoned_sessions
IS
SELECT user_id, login_id, last_connect,limit_time,
responsibility_id,responsibility_application_id, org_id
FROM icx_sessions
WHERE (nvl(disabled_flag,'N') = 'N')
and (last_connect + 1 + (nvl(limit_time,4)/24)) <sysdate;

BEGIN
FOR session_rec in c_abandoned_sessions
LOOP
-- END DATE abandoned session using FND_LOGINS
-- Assume that a session is considered abandoned when it has
-- been without inactivity for 24 hours + ICX_LIMIT_TIME
-- if last_connect is null will update with sysdate.
FND_SIGNON.audit_user_end(session_rec.login_id, session_rec.last_connect +
nvl(session_rec.limit_time,4)/24);
END LOOP;
COMMIT;
END;
/


The DELETE statements in script FNDDLTMP.sql:


delete icx_sessions
where (nvl(disabled_flag,'N') = 'Y')
or (nvl(disabled_flag,'N') = 'N' and
(last_connect + 1 + nvl(limit_time,4)/24 <sysdate));

delete icx_session_attributes
where session_id not in (select session_id from icx_sessions);

delete icx_transactions
where session_id not in (select session_id from icx_sessions);

delete icx_text
where session_id not in (select session_id from icx_sessions);

delete icx_context_results_temp
where datestamp < sysdate - 1/24;

-- deleting unsuccesful log information after 30 days.
delete icx_failures
where creation_date < SYSDATE - 30;

delete fnd_session_values
where ICX_SESSION_ID not in (select session_id from icx_sessions);


The actual purge block in FNDDLTMP.sql (which was never invoked in our case):


begin
fnd_bc4j_cleanup_pkg.delete_transaction_rows(SYSDATE - 4/24);
fnd_bc4j_cleanup_pkg.delete_control_rows(SYSDATE - 4/24);
end;
/


3. The most interesting finding was that this is a bug in the standard Oracle code, for the following reason:


Based on the SQL ID which got executed by the Purge program (SQL ID 32czjgd3hu7zf), the issue is caused by a bug in the standard
code. This is the execution plan:
================================================================================
Inst: 3
SQL ID: 32czjgd3hu7zf
Child number: 0
Plan hash value: 2599260651

----------------------------------------------------------------------------------
| Id | Operation | Name | E-Rows |E-Bytes| Cost (%CPU)| E-Time |
----------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 130K(100)| |
| 1 | FOR UPDATE | | | | | |
|* 2 | TABLE ACCESS FULL| FND_LOGINS | 1 | 16 | 130K (9)| 00:26:06 |
----------------------------------------------------------------------------------

Peeked Binds (identified by position):
--------------------------------------

1 - (NUMBER): 1234

Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter((TO_NUMBER("C_LOG"."SPID")=:B1 AND "END_TIME" IS NULL))

Column Projection Information (identified by operation id):
-----------------------------------------------------------

1 - "C_LOG".ROWID[ROWID,10]
2 - "C_LOG".ROWID[ROWID,10], "END_TIME"[DATE,7],
"C_LOG"."SPID"[VARCHAR2,30]

The problem is a full table scan on FND_LOGINS (4.7G in size), caused by the fact that Oracle passes a numeric bind variable while SPID is a VARCHAR2 column. The implicit TO_NUMBER() conversion means the index on SPID (FND_LOGINS_N1) cannot be used. The full table scan takes 570 seconds (and it has been executed 254 times), but if I run the following query (with '1234' passed as a VARCHAR2) it uses the index, runs in under a second, and returns 63650 rows:
select count(*) from applsys.FND_LOGINS where spid='1234';

So fix that SQL and at least this piece will run quickly.

Unfortunately this SQL is in the FND_SIGNON package, line 90:
procedure AUTH_LOGOUT_UPD(p_pid number,
p_pend_time date) is

TYPE Ty_rowid IS TABLE OF ROWID
INDEX BY BINARY_INTEGER;

L_ROWID Ty_rowid;

cursor get_upd_rowid is
select c_log.rowid
from fnd_logins c_log
where c_log.spid = p_pid
and end_time is null
FOR UPDATE SKIP LOCKED;

Begin


So either the AUTH_LOGOUT_UPD procedure needs to be changed to pass the PID with the correct datatype, or the FND_LOGINS.SPID column should be given the correct datatype.



SQL> desc applsys.fnd_logins
Name Null? Type
----------------------------------------- -------- ----------------------------
LOGIN_ID NOT NULL NUMBER
USER_ID NOT NULL NUMBER
START_TIME NOT NULL DATE
END_TIME DATE
PID NUMBER
SPID VARCHAR2(30)
============================

Checking this note:
What is the Relationship Between the ICX_SESSIONS Table and the FND_LOGINS Table? ( Doc ID 358823.1 ) 

It shows this is the relationship between v$process and v$session:
select count(distinct d.user_name)
from apps.fnd_logins a,
v$session b, v$process c, apps.fnd_user d
where b.paddr = c.addr
and a.pid=c.pid
and a.spid = b.process
and d.user_id = a.user_id
and (d.user_name = 'USER_NAME' OR 1=1);

So SPID actually refers to v$session.process column which is VARCHAR2.

Given that PID on v$process is numeric while SPID is VARCHAR2, this is a defect in the standard Oracle code.

However, in EBS 12.2.4 this appears to have been rectified, as seen below:

Code has been fixed to pass in p_pid as a varchar2: 
============================== 
$ strings -a AFSCSGN*.pls|grep '$Header' 
/* $Header: AFSCSGNB.pls 120.12.12020000.7 2015/08/19 19:56:18 jwsmith ship $ */ 
/* $Header: AFSCSGNS.pls 120.6.12020000.4 2015/06/18 09:58:15 absandhw ship $ */ 
============================== 
-- 
-- AUTH_LOGOUT_UPD (added for bug 18903648 ) 
-- 
-- 
procedure AUTH_LOGOUT_UPD(p_pid varchar2, 
p_pend_time date) is 

TYPE Ty_rowid IS TABLE OF ROWID 
INDEX BY BINARY_INTEGER; 

L_ROWID Ty_rowid; 
l_tmp_spid FND_LOGINS.SPID%TYPE; 

cursor get_upd_rowid is 
select c_log.rowid 
from fnd_logins c_log 
where c_log.spid = l_tmp_spid 
and end_time is null 
FOR UPDATE SKIP LOCKED; 
============================== 









Wednesday 11 May 2016

When clearing all cache in Functional Administrator or Using OA framework, it errors With Error 404--Not Found : EBS 12.2.3


EBS: 12.2.3
RAC Database: 11.2.0.4
Multinode Application


Issue:

1. Responsibility – Functional Administrator
2. Goto Tab “Core Services” > Caching Framework
3. Select Global Configuration
4. Click on Button – Clear All Cache

Error 404--Not Found
From RFC 2068 Hypertext Transfer Protocol -- HTTP/1.1:
10.4.5 404 Not Found


ALSO:

AR responsibility> Customers> Customers> Search Customer using Account Number:


Error 404--Not Found
From RFC 2068 Hypertext Transfer Protocol -- HTTP/1.1:
10.4.5 404 Not Found


Solution:

Oracle document with the same issue: Doc ID 2005982.1. This did not fix the problem.

Steps followed to resolve:

cd $OA_HTML
cd ..
cp -r html html.bak
unzip -o ebsuix-install.zip
cd $OA_HTML/cabo/jsps
$FND_TOP/patch/115/bin/jjspcompile.pl -compile -s 'frameredirect.jsp' --flush
$FND_TOP/patch/115/bin/jjspcompile.pl -compile

Restart the application tier services (cleanup).

Thursday 28 January 2016

Performance issue when multiple users try to log in to Oracle EBS 12.1 at the same time.





Oracle EBS 12.1
Oracle Database 11.2.0.4


Problem Statement:

Multiple users try to log in to Oracle Apps at the same time, i.e., when one of the global business units comes online.
Most of the connections hang doing nothing, and for users who are already logged in, forms navigation is extremely slow.


Resolution:

- The performance problem is related to the fact that many users connect at the same time and some connections hang for several minutes doing nothing. This can happen when the Oracle 11g JDBC driver blocks on /dev/random because the entropy pool is empty.

Forms keeps a pool of JDBC connections. If more connections are needed, new ones have to be created, and JDBC can take a long time to create them because of the /dev/random problem.


The solution is as follows:

- Create a soft link: on Linux, make /dev/random a link to /dev/urandom.
  This remains in effect only until the server is rebooted.
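
A hedged sketch of the temporary fix (keep the original device node so it can be restored later; systems managing /dev with udev may recreate it on reboot, which is also why the change does not persist):

sudo mv /dev/random /dev/random.orig
sudo ln -s /dev/urandom /dev/random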

To make the change permanent:

In the context file, add the following to all the jvm_start_options entries (forms, oacore, oafm):


-Djava.security.egd=file:/dev/./urandom

And run autoconfig.



Note: Entropy is the randomness the Linux kernel collects from unpredictable events such as keyboard and mouse activity and interrupt timings, and it feeds the random numbers served by /dev/random. When the entropy pool is empty, reads from /dev/random block until more entropy is gathered, which is what stalls the JDBC connections described above.



Tuesday 5 January 2016

Upgrade Oracle Enterprise Manager 12.1.0.5 to 13.1



1) Apply the latest PSU to the Repository 12.1.0.2.0 Database.

2) Download zips and install script to the Management Server.

3) Run a one-system upgrade.


DONE !!!!!!!


Reference Documentation: http://docs.oracle.com/cd/E63000_01/EMUPG/toc.htm