Wednesday, January 30, 2008

Troubleshooting Workflow Performance

Have you ever had a process that was taking a long time to run and wanted to figure out where it was spending its time? These SQL statements may help you figure it out. Change the value of the date string to the point in time at which you want to start tracking...

-- ----------------------------------------------------------------------
-- Lists all workflow instances that have fired since a given date and
-- time, in the order they fired along with their status
-- ----------------------------------------------------------------------
select wf_name, bo_type_name, bo_event_name,
case
when status_id = 1 then 'In Progress'
when status_id = 40 then 'Running'
when status_id = 90 then 'Stopped'
when status_id = 100 then 'Completed'
else 'Unknown'
end as status
,start_date, end_date
from wf_template
where template_flag not in (10,20,25,30) -- 10,20,25,30 are templates, everything else is an instance
--and status_id <> 100 -- 100 are completed, so uncomment this to see only those that haven't finished running
and created_date > to_date('01/29/2008 01:23:45 PM', 'MM/DD/YYYY HH:MI:SS AM')
order by created_date asc

-- ----------------------------------------------------------------------
-- Lists workflow instances that have fired since a given time, along
-- with how long it took for them to process
-- ----------------------------------------------------------------------
select wf_name, bo_type_name, bo_event_name,
trunc(86400*(end_date-start_date)) as processing_time_secs -- (end_date - start_date) is in days; x 86400 gives whole seconds
from wf_template
where template_flag not in (10,20,25,30) -- 10,20,25,30 are templates
and created_date > to_date('01/29/2008 01:23:45 PM', 'MM/DD/YYYY HH:MI:SS AM')
order by 4 desc

-- ----------------------------------------------------------------------
-- Gives the average processing time and the number of times a given workflow
-- (by name) has fired since a specified date and time
-- ----------------------------------------------------------------------
select wf_name, count(*) as times_fired, avg(trunc(86400*(end_date-start_date))) as avg_processing_time_secs
from wf_template
where template_flag not in (10,20,25,30) -- 10,20,25,30 are templates
and created_date > to_date('01/29/2008 01:23:45 PM', 'MM/DD/YYYY HH:MI:SS AM')
group by wf_name
order by 3 desc
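
Oracle date subtraction returns a difference in days, which is why the expressions above multiply by 86400 to get seconds. If you would rather see a readable duration than a raw number, a variant of the same query (a sketch, Oracle-specific) can use NUMTODSINTERVAL:

-- Same instance listing, but with the duration shown as DAY TO SECOND
select wf_name, bo_type_name, bo_event_name,
numtodsinterval(end_date - start_date, 'DAY') as processing_time
from wf_template
where template_flag not in (10,20,25,30) -- exclude templates
and created_date > to_date('01/29/2008 01:23:45 PM', 'MM/DD/YYYY HH:MI:SS AM')
order by 4 desc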

Thursday, January 24, 2008

Bypassing the TRIRIGA 9 Approval Process

Do you use the built-in approval process in TRIRIGA 9? If you don't use it but haven't followed the steps outlined below, you are allowing your system to generate unnecessary business objects, create unnecessary associations, and run unnecessary workflows. All of these unnecessary items result in slower system performance and increased space usage.


Using assets as an example, most of my clients never use the out-of-box approval process for activating an asset. Here's what I do to bypass the approval process:

1. Retire Workflow: triAsset - triActivate - Submit for Approval and Dependant Record Update
2. Modify Workflow: triAsset - triActivate - Make Dependant Records Read Only (add a step at the end of the workflow to have it call triFinalApprovalHidden)

By doing this you are executing the hidden approval step without creating any of the intermediate approval objects. It is also easy to put this back in place if you choose to use the approval process in the future: simply reverse the process. The workflows for most objects have the same names as the ones above (except for the BO name), and the process to disable them is the same.

Additionally, most of my clients don't need or want any notifications on status change of an object. If you don't need a notification on any status change for a particular object (assets in this example), bypassing this process is easy; simply retire the following workflow: triAsset - Associate - triStatus - Create Notification Details current status. This saves the system from having to run this workflow for every status change of every object. To turn it back on, just republish the workflow.
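
To confirm the exact workflow names in your environment before retiring anything, you can look them up by name (a sketch against the tririga schema, using the same wf_lookup columns as the scripts elsewhere on this page):

-- Find the status-change notification workflows across all BOs
select wf_name, instance_data_flag
from wf_lookup
where wf_name like '%Create Notification Details%'
order by wf_name;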

Tuesday, January 22, 2008

IBS_SPEC_ASSIGNMENTS Table

Is your IBS_SPEC_ASSIGNMENTS table getting too large? One of my clients currently has 19 GB of data in over a quarter billion rows in this table alone. Here is some SQL to help analyze the situation:

This SQL statement determines the top business objects and association types in the ibs_spec_assignments table:

select spec_class_type, ass_type, count(*) as cnt
from ibs_spec_assignments
group by spec_class_type, ass_type
order by cnt desc

You may be able to narrow down some records to clean up from there. Also, you may want to find out what exists in the ibs_spec_assignments table that is no longer in the ibs_spec table:

select count(*) from ibs_spec_assignments where spec_id not in (select spec_id from ibs_spec)

and

select count(*) from ibs_spec_assignments where ass_spec_id not in (select spec_id from ibs_spec)
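
On a table this large, NOT IN subqueries can be slow. An equivalent sketch using outer joins, which Oracle typically handles better, counts both orphan types in a single pass:

-- Rows whose spec_id or ass_spec_id no longer exists in ibs_spec
select
sum(case when s1.spec_id is null then 1 else 0 end) as orphaned_spec_id,
sum(case when s2.spec_id is null then 1 else 0 end) as orphaned_ass_spec_id
from ibs_spec_assignments a
left join ibs_spec s1 on s1.spec_id = a.spec_id
left join ibs_spec s2 on s2.spec_id = a.ass_spec_id;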


I removed the cleanup script that I used at the request of TRIRIGA. You can send me an email if you need more information.


Friday, January 18, 2008

Workflow Cleanup Script

The development group at TRIRIGA indicated that they have seen performance issues on workflow cleanup once you get a lot of instance data in the tables. It is recommended you use SQL to clean it. Below is a script TRIRIGA development wrote to do the cleanup (works on the Momentum 2.5 and above platform).
Bring down the process server to stop the cleanup process, then run this script. It will prompt for a date; use today or yesterday to clean as much as possible.

-- Workflow Instance information is in the following tables:
-- (many also have template information - so they can't simply be truncated)
--
-- TASK
-- TASK_ASSIGNEES
-- TASK_FILTER
-- TASK_RESULT_LIST
-- WF_CALLWF_RESULT_LIST
-- WF_CONDITION
-- WF_TEMPLATE_STEP
-- OBJECT_TYPE_MAP
-- WF_STEP_INST
-- WF_INSTANCE
-- WF_TEMPLATE
--


-- Select WF instances that ended at a point older than 'date' and are not active or waiting for a user.
SELECT WFT.*
FROM WF_TEMPLATE WFT
WHERE WFT.TEMPLATE_FLAG = 0
AND WFT.STATUS_ID NOT IN (40,60,61)
AND WFT.END_DATE < '&date'; -- SQL*Plus prompts for the date here
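
The delete statements themselves did not survive intact here; only their join columns are still legible (wf_template_id, wf_pid, wf_inst_id). The sketch below reconstructs the apparent pattern from those fragments. Treat it as an assumption, not the original script: verify the column names against your schema and take a backup before running any of it.

-- Sketch: remove instance rows from each table listed above, keyed back to
-- the WF_TEMPLATE rows identified by the SELECT. Column names are inferred
-- from fragments of the original script and may differ in your version.
DELETE FROM TASK WHERE WF_TEMPLATE_ID IN
  (SELECT WF_TEMPLATE_ID FROM WF_TEMPLATE WFT
   WHERE WFT.TEMPLATE_FLAG = 0 AND WFT.STATUS_ID NOT IN (40,60,61)
   AND WFT.END_DATE < '&date');

-- ...repeat for TASK_ASSIGNEES, TASK_FILTER, TASK_RESULT_LIST,
-- WF_CALLWF_RESULT_LIST, WF_CONDITION, WF_TEMPLATE_STEP and OBJECT_TYPE_MAP...

DELETE FROM WF_STEP_INST WHERE WF_INST_ID IN
  (SELECT WF_TEMPLATE_ID FROM WF_TEMPLATE WFT
   WHERE WFT.TEMPLATE_FLAG = 0 AND WFT.STATUS_ID NOT IN (40,60,61)
   AND WFT.END_DATE < '&date');

DELETE FROM WF_INSTANCE WHERE WF_PID IN
  (SELECT WF_TEMPLATE_ID FROM WF_TEMPLATE WFT
   WHERE WFT.TEMPLATE_FLAG = 0 AND WFT.STATUS_ID NOT IN (40,60,61)
   AND WFT.END_DATE < '&date');

-- Finally remove the instance headers themselves
DELETE FROM WF_TEMPLATE WHERE TEMPLATE_FLAG = 0
  AND STATUS_ID NOT IN (40,60,61) AND END_DATE < '&date';

COMMIT;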


Suspending Workflow Instance Logging

Saving workflow instances is helpful for troubleshooting, but it can be a drain on system performance. This is especially true when migrating a large amount of data into your system. The scripts below can be run before a data migration to turn workflow instance logging off for all workflows (saving your current settings), and afterward to restore workflow instance logging.

--------------------------------------------------------------------------------------------
-- ****************** BEFORE MIGRATION *********************------
-- Create backup of instance_data_flag values ------
-- Update all templates to set instance flag = false ------
--------------------------------------------------------------------------------------------

create table wf_template_bk as (
select wf_template_id, wf_template_version, instance_data_flag from wf_template where template_flag = 1);

create table wf_lookup_bk as (
select wf_template_id, wf_template_version, instance_data_flag from wf_lookup);

update wf_template t1 set t1.instance_data_flag = 0 where t1.template_flag = 1;
update wf_lookup t1 set t1.instance_data_flag = 0;
------------------------------------------------------------------------------------------
-- ****************** AFTER MIGRATION **********************------
-- Update from backup of instance_data_flag values ------
-- Drop backup tables ------
------------------------------------------------------------------------------------------

update wf_template t1 set t1.instance_data_flag = (select t2.instance_data_flag from wf_template_bk t2 where t1.wf_template_id = t2.wf_template_id and t1.wf_template_version = t2.wf_template_version)
where t1.template_flag = 1
and exists (select 1 from wf_template_bk t2 where t1.wf_template_id = t2.wf_template_id and t1.wf_template_version = t2.wf_template_version); -- guard so templates created after the backup are not nulled out

update wf_lookup t1 set t1.instance_data_flag = (select t2.instance_data_flag from wf_lookup_bk t2 where t1.wf_template_id = t2.wf_template_id and t1.wf_template_version = t2.wf_template_version)
where exists (select 1 from wf_lookup_bk t2 where t1.wf_template_id = t2.wf_template_id and t1.wf_template_version = t2.wf_template_version);
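
Before dropping the backup tables, a quick sanity check against the same tables confirms the flags were restored:

-- Distribution of instance_data_flag after the restore
select instance_data_flag, count(*)
from wf_template
where template_flag = 1
group by instance_data_flag;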
drop table wf_template_bk;
drop table wf_lookup_bk;

Monitoring the Cleanup Agent

If you want to track the progress of the cleanup agent you can use log4j to create a cleanup.log. Open the log4j.xml file in the config directory and add the following under the existing appender:

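The appender XML itself was lost to the blog template. A log4j 1.x appender and category along the lines below reproduce the effect; the file path and the category (logger) name are assumptions, so match them to the class names already present in your log4j.xml:

<!-- RollingFileAppender writing to cleanup.log; adjust the File path -->
<appender name="cleanup" class="org.apache.log4j.RollingFileAppender">
    <param name="File" value="/usr/local/tririga/log/cleanup.log"/> <!-- assumed path -->
    <param name="MaxFileSize" value="10MB"/>
    <param name="MaxBackupIndex" value="5"/>
    <layout class="org.apache.log4j.PatternLayout">
        <param name="ConversionPattern" value="%d %-5p %m%n"/>
    </layout>
</appender>

<!-- Route the cleanup agent's DEBUG output to the appender above.
     The category name is an assumption based on the DBAnalyze messages
     that show up in the resulting log; match it to your platform's packages. -->
<category name="com.tririga.platform" additivity="false">
    <priority value="DEBUG"/>
    <appender-ref ref="cleanup"/>
</category>
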
Be sure to set the File parameter's value to point to the directory where you want the log file created. You should probably use the same path as the existing appender in the log4j file. You will probably need to restart JBoss/WebLogic for the change to take effect. The result will be a cleanup.log file that will show you messages similar to the ones below:

DEBUG DBAnalyze - Clean up - Setting Cleanup Batch size to 900
2008-01-17 20:04:41,963 DEBUG Deleted 900 runtime process only WF Templates and associatated data
2008-01-17 22:13:48,040 DEBUG Deleted 900 runtime process only WF Templates and associatated data

Wednesday, January 16, 2008

Analyzing Long Running SQL Queries

If you are able to identify long running SQL queries in your database and want to trace them back to the report or query in the application that caused the SQL to be generated, use the following method:

Consider the following SQL statement:

select t1.spec_id ,t1.SYS_TYPE1 , t1.triUserMessageFlagTX , t1.triIdTX ,
t1.triNameTX , t1.triIdTX , t1.triNameTX , t1.triRequestedByTX , t1.triCreatedSY ,
t1.triStatusCLOBJID from
M_TRIREQUEST t1 where t1.SYS_OBJECTID>0 and t1.SYS_TYPE1 = 1.0002583E7
and t1.SYS_PROJECTID = 1.0
and isnull(t1.triStatusCL, -9223372036854775808)
not in ('Retired','Upload Error','Template', 'History', 'Deleted')
and t1.spec_id in ( select ta1.ass_spec_id from ibs_spec_assignments ta1
where ta1.spec_id in (208133)
union select ta1.spec_id from
ibs_spec_assignments ta1 where
ta1.ass_spec_id in (208133) )
order by upper(t1.triIdTX) ,upper(t1.triNameTX) ,upper(t1.triIdTX) ,upper(t1.triNameTX)

Take a fairly distinctive column, triRequestedByTX for example, and find all the places it is used in conjunction with the table/view name M_TRIREQUEST by running the following SQL query:

select * from REP_TEMPLATE_COLUMNS where field_name = 'triRequestedByTX'
and table_name = 'M_TRIREQUEST'

Then, select the rep_template_id and find the names of the reports from the rep_template_hdr table:

select rep_name from rep_template_hdr where rep_template_Id in (17879,25749)

This could be done in one statement like:

select rep_name, rep_template_id from REP_TEMPLATE_HDR where rep_template_id in
(
select distinct rep_template_id from REP_TEMPLATE_COLUMNS
where field_name = 'triRequestedByTX'
and table_name = 'M_TRIREQUEST'
)

Now, open TRIRIGA, go into the Report Manager, and click “open” for any report. Press CTRL-N to open a new window where you can edit the URL. Replace the report template id in the URL with one of the ids returned above, and the offending report will open.

Once you find the offending report you may want to reconsider how you have your filters set up, or you may want to remove it from a portal section or frequently used object. An alternative to removing it is to just remove it from the first tab of an object so it does not run as often.

Wednesday, January 9, 2008

WF_STEP_INST Table

The WF_STEP_INST table in the tririga schema basically holds the structure information for the actual workflow instances, i.e., all of the tasks and how they relate for all running workflows. This data is cleaned up at runtime if you are in production mode. You shouldn't ever have much data in this table, as it should only contain data for:

1) active workflows
2) workflows with pending tasks (e.g. user tasks)

Unless you have tons of workflows with user tasks that are leaving the workflows active (and hence the instance data in the system), or a lot of orphaned data, this table shouldn't have too many rows. Removing the check next to Save Instance Data in workflow start tasks reduces the number of rows inserted into (and eventually deleted from) this table. This can be important when doing large data loads, as I have seen delete operations against this table become a bottleneck when the system is processing large quantities of workflows after a data load.
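
A quick way to see whether this table is a problem in your environment, and which workflows own the rows (a sketch; the wf_inst_id join column is an assumption carried over from the cleanup script above):

-- Step-instance rows per workflow, largest owners first
select wft.wf_name, count(*) as step_rows
from wf_step_inst wsi, wf_template wft
where wsi.wf_inst_id = wft.wf_template_id
group by wft.wf_name
order by step_rows desc;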

Monday, January 7, 2008

Tracking Data Integrator Loads

This SQL script (run against the tridata schema) will provide all Data Integrator loads, the status of the load, and which user initiated the upload:

select ibs.created_date "STARTED_DATE",
ibs.updated_date "FINISHED_DATE",
ibs.object_state "STATUS",
mp.sys_useraccount "USER_ACCOUNT",
ibs.spec_name "UPLOAD_NAME"
from ibs_spec ibs, t_myprofile mp
where ibs.spec_template_id = 106548 and
ibs.created_by = mp.spec_id
order by 1 desc;
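
The spec_template_id of 106548 is specific to this environment. If yours differs, you can look it up from ibs_spec itself; the name pattern below is an assumption, so search for whatever your DI upload object is called:

-- Find the template id of the Data Integrator upload object
select distinct spec_template_id, type_name
from ibs_spec
where upper(type_name) like '%UPLOAD%';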

Friday, January 4, 2008

Scripts to Identify Object Changes

The scripts below show changes made to various types of objects within the last day. These can be modified to show a date range or release date.


-- Business Objects (run in tridata schema)
select Updated_Date, s.name
from ibs_spec_type s
where updated_date > SYSDATE -1
order by s.name;

--Workflows (run in tririga schema)
select w.updated_date, m.data_value Module, w.wf_name
from wf_lookup w, list_value m
where w.updated_date > SYSDATE -1
and m.LIST_ID = 2
and m.list_value_id=w.module_id
order by m.data_value, w.wf_name;

--GUIs (run in tririga schema)
select g.updated_date, m.data_value Module, g.gui_name
from gui_header_publ g, list_value m
where g.updated_date > SYSDATE -1
and m.LIST_ID = 2
and m.list_value_id=g.spec_class_type
order by m.data_value, g.gui_name;

--Queries (run in tririga schema)
select r.updated_date, m.data_value Module, r.rep_name
from rep_template_hdr r, list_value m
where r.updated_date > SYSDATE -1
and m.LIST_ID = 2
and m.list_value_id=r.class_type_id
order by m.data_value, r.rep_name;

--Managers which have been modified
select Updated_Date, name Manager_Name from ibs_manager_description
where updated_date > SYSDATE -1
order by name;

--Portals (run against tririga schema)
select distinct Updated_Date, portal_name from user_portal
where updated_date > SYSDATE -1
order by portal_name;

--Portal Sections (run against tririga schema)
select updated_date, portal_section_name from portal_section
where updated_date > SYSDATE -1
order by portal_section_name;

Displaying UOM Values on Form Reports

If you are creating a Form report that includes a Number field and want to display the UOM value of the Number field in the form, use the method shown below. When a Number field with a UOM is created, the system automatically creates a new field by appending _UOM to the Number field's name. This UOM field is not listed in the GUI, so when you export the Report Header there is no way to include the _UOM field. You have to use this method:

Thursday, January 3, 2008

System Usage Script

If you would like to know who has logged into your TRIRIGA system within the past 30 days and when, here is a script you can run against the tririga schema. You can update this script to filter for specific users or filter specific users out (such as data integration users).

SELECT DISTINCT
user_credentials.user_account AS USERNAME,
session_history.start_time AS LOGIN
FROM
user_credentials, session_history
WHERE
user_credentials.user_id = session_history.user_id AND
session_history.start_time > (sysdate - 30)
UNION ALL
SELECT DISTINCT
user_credentials.user_account AS USERNAME,
active_session_user.start_time AS LOGIN
FROM
active_session_user, user_credentials
WHERE
user_credentials.user_id = active_session_user.user_id
ORDER BY 1, 2;
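
For example, to exclude an integration account, add a filter to each half of the union; the account name below is a placeholder:

SELECT DISTINCT
user_credentials.user_account AS USERNAME,
session_history.start_time AS LOGIN
FROM
user_credentials, session_history
WHERE
user_credentials.user_id = session_history.user_id AND
session_history.start_time > (sysdate - 30) AND
user_credentials.user_account NOT IN ('INTEGRATION_USER') -- placeholder account name
ORDER BY 1, 2;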

External Redirect

If you would like to hyperlink from an external web page to a specific form within TRIRIGA, use the format shown below. For example, you may want to have a link from your corporate intranet directly into TRIRIGA 9 that pops up a blank request form. This works very nicely in conjunction with single sign-on.

To get the URL of the form you would like to use, simply go to the form in the application (i.e., open a new request), press the F11 key on your keyboard, and copy the URL to your clipboard. It will look like the first example below. Then all you need to do is insert http://your_servername/html/en/default/admin/extRedirect.jsp?url= before the /WebProcess.srv?... portion of the URL.

Instead of this:

http://your_servername/WebProcess.srv?objectId=750000&actionId=750011&propertyId=208133&projectId=1&specClassType=13&specTypeId=10001791&managerType=list&guiId=10001791

Link to this:

http://your_servername/html/en/default/admin/extRedirect.jsp?url=/WebProcess.srv?objectId=750000&actionId=750011&propertyId=208133&projectId=1&specClassType=13&specTypeId=10001791&managerType=list&guiId=10001791

Wednesday, January 2, 2008

Hosting HTML Files Locally

When developing in TRIRIGA you occasionally come across the need to point to an external HTML file from within the application. For example, you may want to create a detailed help screen in HTML and link to it from a section action within the application. Instead of posting that HTML page to another web server, you can host it directly on your TRIRIGA application server.

All you have to do is copy your HTML file to the /userfiles/company-1 directory and set your URL to look like this:
/getCompanyFile.jsp?fileLoc=//Company-1/myfile.htm

Another Method for Data Cleanup

This post was removed at the request of TRIRIGA....sorry

Data Cleanup Tool

Here is the link to the newest data cleanup tool for TRIRIGA:

http://ServerName:8001/html/en/default/datadictionary/dataCleanupUtility.jsp

Be very careful using this tool as it can cause some serious damage to your system. Here are a few comments from internal TRIRIGA sources:

“I would DEFINITELY make an export (backup) before running these scripts. They aren’t supported by Platform, so use AS IS (and I wouldn’t give them to a customer without permission).”

“Ok, so the cat's out of the bag so to speak, but please please please do not consider these clean scripts ready to be used without a thorough understanding of the scripts and the results they will have on your system. I encourage you to please contact a PA/TA to walk through these before you employ them. To xxxx's comment, obviously do not share with clients without explicit agreement from TS leadership and the PM on the project. These will delete pretty much all instance data in your system...please tread lightly.”

"Rumor has it that QA plans to support and document this utility….eventually. But, no word yet on when that may happen. So, unfortunately, everyone is on their own. You might try calling tech support and asking about it, but you’ll need to get past the first level tech support people….who will probably refuse to even talk to you about it."

Get a Count of all Objects in TRIRIGA 9

The script below can be used to get a list of all objects in your TRIRIGA database, showing the objects with the greatest record counts first. This can be useful to identify which records are consuming all of the space in your database, especially ones that have no business value for you.
This script should be run against your tridata schema.


SELECT
spec_template_id,
type_name,
COUNT(*) AS record_count
FROM
ibs_spec
GROUP BY spec_template_id, type_name
ORDER BY 3 DESC