Monday, March 30, 2009

PeopleSoft Performance Tuning

1. Introduction:

It is a widely known fact that 80% of performance problems are a direct result of the application code; other factors, such as server configuration and resource contention, also contribute to poor performance. Assuming you have tuned your servers and followed the guidelines for your database server, application server, and web server, most of your performance problems can be addressed by tuning the PeopleSoft application.
This article presents methodologies and techniques for optimizing the performance of PeopleSoft applications. The tips that follow cover several different aspects of a PeopleSoft environment, from servers to indexes. Some of them will give you a significant improvement in performance; others may not apply to your environment.
2. Server Performance:

In general, the approach to application tuning starts by examining the consumption of resources. The entire system needs to be monitored to analyze resource consumption on an individual component basis and as a whole.
The key to tuning servers in a PeopleSoft environment is to implement a methodology to accurately capture as much information as possible without utilizing critical resources needed to serve the end-users.
Traditional tools used to measure utilizations impact the system being measured and ultimately the end-user experience. Commands like the following provide snapshot data but not without an associated cost. These tools can consume a significant amount of resources so care should be taken when executing them.
df, iostat, ipcs, netstat, ps, sar, size, swapinfo, timex, top, uptime, and vmstat (plus glance and gpm on HP-UX)
The goal of using these native commands is to identify if, and where, a bottleneck exists in the server. Is the problem in the CPU, I/O, or memory? These native tools provide indicators, but at the same time they can skew the results because of their associated overhead. Typically, additional third-party tools are needed to complete the analysis.
The last hurdle in tuning the server is deciding when to upgrade the hardware itself. To do this, much more information needs to be collected and stored in order to understand whether a historical spike in resource utilization was a one-time aberration or a regular occurrence building over time. The recommendation is to look to third-party vendors for solutions that can collect key performance indicators while minimizing overhead on the system. The collected data can then be put in a repository for detailed historical analysis.
3. Web Server Performance:

The release of PeopleSoft Pure Internet Architecture™ introduced new components to the PeopleSoft architecture: the web server and the application server. The application server is where most shops struggle with appropriate sizing. Web servers handle end-user requests from a web browser, eliminating the administrative costs associated with loading software (fat clients) on individual desktops. The benefit is a significant savings on software deployment, maintenance, and upgrade costs. While the shift from fat clients to thin lessens the administrative burden, it increases the need to ensure the web servers are finely tuned, since they service a large number of clients. Optimal web server performance is vital given the mission-critical role PeopleSoft plays in today's enterprise.
Recommendations for ensuring good performance for web servers:

• Ensure the load-balancing strategy is sound

• Implement a solution to verify and highlight changes in traffic volumes

• Closely monitor the response times to verify that the strategy is optimizing the web servers

• Measure and review historical patterns on server resource utilization (see server section above).

• Increase the JVM heap size (for example, to 200, 250, 300, or 380 MB) in the WebLogic startup script.
4. Tuxedo Performance Management:

Tuxedo is additional middleware PeopleSoft utilizes to manage the following Internet application server services:
• Component Processor—Responsible for executing PeopleSoft Components—the core PeopleSoft application business logic

• Business Interlink Processor—Responsible for managing the interactions with third-party systems

• Application Messaging Processor—Manages messages in a PeopleSoft system

• User Interface Generator—Generates the user interface based on the Component or Query definition and generates the appropriate markup language (HTML, WML, or XML) and scripting language (JavaScript, WMLScript) based on the client accessing the application

• Security Manager—Authenticates end-users and manages their system access privileges

• Query Processor—Executes queries using the PeopleSoft Query tool

• Application Engine—Executes PeopleSoft Application Engine processes

• Process Scheduler—Executes reports and batch processes and registers the reports in the Portal’s Content Registry

• SQL Access Manager—Manages all interaction with the relational DBMS via SQL
This Tuxedo middle tier is another critical and influential component of performance. Similar to the web server, what is needed is a way to see into the “black box” to further understand some of the key performance metrics.
Some of the performance metrics to capture when analyzing Tuxedo are:

• Transaction volumes by domain, server, and application

• Response time for each end-user request

• The Tuxedo service generating a poorly performing SQL statement

• Breakdown of Tuxedo time into service time and queue time

• Problem origin: is it in Tuxedo or the database?

• Response time comparisons across multiple Tuxedo servers
Reports have shown that too often companies throw hardware at a Tuxedo performance problem when a more effective solution can be as simple as adding another domain to the existing server(s). This is largely because PeopleSoft and Tuxedo lack management solutions that provide historical views of performance.
5. Application Performance:

It is an accepted fact that 80% of application and database problems reside in the application code. But there are other technical items to consider that can influence the application's performance. Here are some specific items to focus on when evaluating the database environment:
• Make sure the database is sized and configured correctly

• Make sure that the hardware and O/S environments are set up correctly

• Verify that patch levels are current

• Fix common SQL errors

• Review documentation of known problems with PeopleSoft supplied code

• Be sure to check available patches from PeopleSoft that might address the problem

• Review PeopleSoft suggested kernel parameters

• Set up the right number of processes

• Review application server blocking caused by long-running queries

• Make sure not to undersize the version 8 application server

It is also recommended to continue to review these items on a periodic basis.
6. Database Performance:

The performance of an application depends on many factors. We will start with the overall general approach to tuning SQL statements, and then move to such areas as indexes, performance monitoring, queries, the tempdb (often referred to as plain "TEMP"), and, finally, servers and memory allocation.
To understand the effect of tuning, we must compare "time in Oracle" with "request wait time". Request wait time is the time that a session is connected to Oracle but not issuing SQL statements; time in Oracle is the amount of time spent resolving a SQL statement once it has been submitted to Oracle for execution. If time in Oracle is not significantly smaller than the request wait time, application tuning should be examined. Request wait time is almost always much greater than time in Oracle, especially for online users, because of think time.
One exception is a batch job that connects to Oracle, submits SQL statements, and then processes the returned data. For such a job, a high ratio of request wait time to time in Oracle can indicate a loop in the application outside of Oracle.
This should be identified and eliminated before continuing the performance analysis.
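As a rough illustration, these two times can be compared from Oracle's dynamic performance views. This is a minimal sketch, assuming Oracle 10g or later (for V$SESS_TIME_MODEL) and SELECT privilege on the V$ views; "DB time" is recorded in microseconds and wait times in centiseconds:

SELECT s.sid,
       tm.value / 1000000 AS time_in_oracle_secs,  -- time resolving SQL inside Oracle
       ev.time_waited / 100 AS request_wait_secs   -- idle 'think' time between requests
FROM v$session s, v$sess_time_model tm, v$session_event ev
WHERE tm.sid = s.sid
  AND tm.stat_name = 'DB time'
  AND ev.sid = s.sid
  AND ev.event = 'SQL*Net message from client'
  AND s.type = 'USER';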
The next step focuses on tuning the SQL statements that use the most resources. To find the most resource-consuming SQL statements, a scheduled collection approach can be used. Duration is a commonly used criterion for locating offending SQL statements. Other useful criteria include the following wait states: I/O, row lock, table lock, shared pool, buffer, rollback segment, redo log buffer, internal lock, log switch and clear, background process, CPU, and memory. For each offending SQL statement, the execution plan and database statistics are analyzed. The following statistics are important: table and column selectivity, index clustering factor, and storage parameters. First, all the joins of the SQL are considered. For each join, the ordering of the tables is analyzed; it is of major importance to have the most selective filter condition on the driving table. Then the type of the join is considered. If the join is a nested loop, forcing it into a hash join can be advantageous under some conditions.
The analysis stage usually results in several modification proposals, which are applied and tested in sequence. Corrective actions include database object changes and SQL changes. Typical database object changes are index changes, index rebuilds, and table reorganizations.
Typical SQL changes are replacing a subquery with a join, splitting one SQL statement into several, and inserting Oracle hints to direct the optimizer to the right execution plan.
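As an example of one such change, a subquery can often be rewritten as a join. The table and column names below are purely illustrative:

-- Before: the optimizer may run the subquery for each candidate row
SELECT v.voucher_id
FROM ps_voucher v
WHERE v.business_unit IN
      (SELECT b.business_unit
       FROM ps_bus_unit_tbl b
       WHERE b.country = 'USA');

-- After: the same filter expressed as a join
SELECT v.voucher_id
FROM ps_voucher v, ps_bus_unit_tbl b
WHERE v.business_unit = b.business_unit
  AND b.country = 'USA';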
7. Indexes:

Tuning indexes is another important factor in improving performance in a PeopleSoft environment. Index maintenance is crucial to maintaining good database performance. Statistics about data distribution are maintained for each index; the optimizer uses these statistics to decide which indexes, if any, to use. The statistics must be maintained so that the optimizer can continue to make good decisions. Thus, procedures should be set up to update the statistics as often as is practical.
Keep in mind that objects that do not change do not need to have their statistics re-created: if the object has not changed, the statistics will be the same, and re-creating them simply wastes resources.
Since PeopleSoft uses many temporary tables that are loaded and then deleted from, but not dropped, it is helpful to create the statistics when those tables are full of data. If the statistics are created when a table is empty, they will reflect that fact, and the optimizer will not have correct information when it chooses an access path.
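On Oracle, for instance, statistics can be gathered at the point in the process when the working table is populated. A minimal sketch, assuming the usual SYSADM schema owner and a made-up table name:

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => 'SYSADM',      -- PeopleSoft schema owner (verify for your site)
    tabname => 'PS_XYZ_TMP',  -- placeholder for the populated working table
    cascade => TRUE);         -- gather statistics on its indexes as well
END;
/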
Periodically, indexes should be rebuilt to counter index fragmentation. An index creation script can be generated via PeopleTools to drop and rebuild indexes. This procedure eliminates the wasted space on index blocks that Oracle's logical deletes leave behind. It is only necessary on tables that change often (inserts, updates, or deletes).
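On Oracle, a rebuild can also be issued directly against an individual index; the index name below is a placeholder:

ALTER INDEX ps0voucher REBUILD;  -- reclaims the space left behind by deletes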
The index scheme itself is also worth examining. The indexes in a standard PeopleSoft installation may not be the most efficient ones for every installation. Closely examine your data's pattern and distribution, and modify the indexes accordingly. For example, the index on PS_VOUCHER (BUSINESS_UNIT, VOUCHER_ID) could be changed to (VOUCHER_ID, BUSINESS_UNIT) for an implementation with only a few business units. Use the ISQLW query options (Show Query Plan and Show Stats I/O) to determine the effectiveness of new indexes, but be careful to thoroughly test the new index scheme to find all of its ramifications.
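In the PS_VOUCHER example, the change amounts to re-creating the index with the column order swapped. The index name below is illustrative; in a real system the change should also be made to the record's index definition in Application Designer so it is not lost when the table is next rebuilt:

DROP INDEX ps0voucher;
CREATE INDEX ps0voucher
  ON ps_voucher (voucher_id, business_unit);  -- more selective column first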
8. Queries:

It is a good idea to examine queries when trying to fix a problem that is affecting the application. Query Analyzer can be used to see optimizer plans for slow SQL statements: choose "Query/Display Plan" to see a graphical representation of a query plan. Alternatively, issuing "set showplan_text on" and then running the statement produces a textual representation of the plan, showing the indexes used, the order in which the tables were accessed, and so on.
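For example, in a SQL Server query window (the SELECT is only a stand-in for the slow statement being investigated):

SET SHOWPLAN_TEXT ON
GO
SELECT BUSINESS_UNIT, VOUCHER_ID
FROM PS_VOUCHER
WHERE VOUCHER_ID = '00000123'
GO
SET SHOWPLAN_TEXT OFF
GO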
When investigating queries, the number of worktables created per second should also be examined. If a large number of work tables are being created per second (i.e., hundreds per second), a large amount of sorting is occurring. This may not be a serious problem, especially if it does not correspond with a large amount of I/O.
However, performance could be improved by tuning the queries and indexes involved in the sorts; ideally, this will eliminate some of the sorting.
Recommendations for ensuring good performance for Database servers:
- Avoid the comparison operators >, <, >=, <=, IS NULL, and IS NOT NULL where possible
- Avoid NOT IN and !=
- Avoid LIKE '%pattern' and NOT EXISTS
- Avoid calculations on indexed columns, and avoid OR (use UNION instead)
- Avoid HAVING (use a WHERE clause instead)
- Always use table aliases to prefix all columns
- Place indexed columns higher in the WHERE clause
- Use SQL joins instead of sub-queries
- Make the table with the fewest rows the driving table by making it first in the FROM clause
- Establish a tuning environment that reflects the production database
- Establish performance expectations before beginning
- Design and develop with performance in mind
- Create indexes to support selective WHERE clauses and join conditions
- Use concatenated indexes where appropriate
- Pick the best join method:
  - Nested loop joins are best for indexed joins of subsets
  - Hash joins are usually the best choice for "big" joins
- Pick the best join order:
  - Pick the best "driving" table
  - Eliminate rows as early as possible in the join order
- Use bind variables; they are key to application scalability
- Use Oracle hints where appropriate
- Compare performance between alternative syntaxes for a SQL statement
- Consider utilizing PL/SQL to overcome difficult SQL tuning issues
- Consider using third-party tools to make the job of SQL tuning easier
- Use of Bind Variables:
The number of compiles can be reduced to one per multiple executions of the same SQL statement by constructing the statement with bind variables instead of literals.
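The difference is visible in the statement text itself; the names and values below are illustrative:

-- Literals: each distinct value looks like a new statement and is compiled again
SELECT descr FROM ps_dept_tbl WHERE deptid = '10100';
SELECT descr FROM ps_dept_tbl WHERE deptid = '10200';

-- Bind variable: one statement, compiled once and executed with many values
SELECT descr FROM ps_dept_tbl WHERE deptid = :1;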
- Application Engine - Reuse Flag:

Application Engine programs use bind variables in their SQL statements, but these variables are PeopleSoft-specific. By default, when a statement is passed to the database, Application Engine sends it with literal values. The way to tell an Application Engine program to send true database bind variables is to activate the ReUse flag on the Application Engine step containing the statement.
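As a sketch, with illustrative record and field names, the same step SQL is resolved differently depending on the flag:

-- As coded in the Application Engine step:
UPDATE PS_XYZ_TMP SET process_flag = 'Y'
WHERE business_unit = %Bind(BUSINESS_UNIT)

-- Sent to the database with ReUse off (literal, re-compiled every execution):
UPDATE PS_XYZ_TMP SET process_flag = 'Y' WHERE business_unit = 'US001'

-- Sent with ReUse on (true bind variable, compiled once):
UPDATE PS_XYZ_TMP SET process_flag = 'Y' WHERE business_unit = :1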
9. TEMPDB

To ensure that the application is performing at peak efficiency, it is important to look at tempdb. Tempdb is used for sorting result sets, either because of an "order by" clause in a query or to organize intermediate result sets needed to execute a given query plan. If tempdb is being used extensively (evidenced by many work tables being created per second or heavy I/O to the tempdb files), performance can be improved by tuning it.
First, consider moving tempdb to its own set of disks. Do this with "alter database" using the "modify file" option to specify a new location for tempdb's data file and log file. It may also be worthwhile to increase the SIZE option to a larger value, such as 100 MB, and to increase the FILEGROWTH option to around 50 MB.
Another option is to add several data files to tempdb rather than having just one; this helps reduce contention on tempdb. Do this with "alter database" using the "add file" option. As with tempdb's original data file, increase the SIZE and FILEGROWTH options.
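A sketch of both changes in SQL Server syntax. The logical name tempdev is SQL Server's default for the tempdb data file; the paths and sizes are placeholders, and MODIFY FILE accepts only one property change per statement:

-- Move the data file to its own disk (the move takes effect at restart)
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'T:\tempdb\tempdb.mdf');
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 100MB);
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILEGROWTH = 50MB);

-- Add a second data file to reduce contention
ALTER DATABASE tempdb ADD FILE
  (NAME = tempdev2, FILENAME = 'T:\tempdb\tempdb2.ndf', SIZE = 100MB, FILEGROWTH = 50MB);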
10. Servers and Memory Allocation:

The use of an application server is strongly recommended for all on-line connections. The application server queues incoming requests and dramatically reduces process blocking in the database. This will not help batch processes, but it will greatly increase the number of on-line users.
Collecting CPU wait, memory wait, and I/O wait may show that the application is having to wait on server resources. Typically, this indicates an undersized server or other applications on the server hogging resources. Today, many IT organizations are looking at server consolidation to reduce the cost of ownership. Taking this approach puts you in a position to analyze performance over time as an aid to a server consolidation effort.
While it is possible to share a PeopleSoft database server with other applications – database or otherwise – it is always preferable to dedicate the entire server to the PeopleSoft installation. Any process running on the server may use resources that could be better utilized by the database engine or a PeopleSoft process running on the database server. Use of the database server as a file server can seriously degrade the database response time, thus it is important to dedicate the entire server to PeopleSoft processes. The single greatest determinant of database server performance is the amount of memory allocated to it. The memory configuration parameter is expressed in 2K blocks. Thus, if you wanted to allocate 100 MB you would set the memory to 51200.
Generally, the more memory allocated to a server the better it will perform. The goal is to add enough memory to increase performance, but not so much that it no longer helps.
This determination can be made with the NT Performance Monitor. Monitor the cache hit ratio and disk usage to determine whether more memory should be allocated to the database engine. For the most part, with database servers it is better to have too much memory allocated than not enough. For application servers, additional memory usually helps, but too much can have a negative effect, since more memory means longer search times for operations that frequently look up objects in memory.
11. Conclusion:

The methods presented here are intended as tips to help better tune PeopleSoft applications. As mentioned earlier, they are simply suggestions and should be used with caution, since each tip may not apply directly to your situation. Used properly, however, they can help PeopleSoft applications perform at an optimal level.
There are many native tools available to monitor the various components that make up the PeopleSoft landscape; how effective they are at identifying the root cause of performance problems is open to question. The ultimate goal is to find a single solution that provides visibility end-to-end and everywhere in between.

Tuning Your PeopleSoft Apps: Indexes and Temp Tables
It is a widely known fact that 80% of performance problems are a direct result of the application code. There are other factors that contribute to poor performance, such as server configuration, resource contention, and other issues that we have described in previous chapters. Assuming you have tuned your servers and followed the guidelines for your database server, application server, and web server, most of your performance problems can be addressed by tuning the PeopleSoft Application code.

Tuning the application can consist of tuning PeopleCode, SQR code, SQL-intensive code, queries, nVision, and indexes. In this article, we will focus on indexing and temporary tables.

Ineffective Indexing

One of the most common performance problems in a PeopleSoft application is ineffective indexing against key application tables. As we stated earlier, PeopleSoft software is delivered with a generic code set that runs on several database platforms. Likewise, the delivered indexes are not specific to any one environment. Because of this, you need to fine-tune your application by selectively finding poorly performing processes and determining whether or not the cause is ineffective indexing. This can be achieved by tracing the SQL of poorly performing pages, Application Engine programs, COBOL programs, or SQR programs and finding the long-running queries. Once you find the problematic queries that take a significant amount of time to complete, you will need to analyze the indexes that are being used.

Here is an example of how to fine-tune your indexes. The Journal Generator application, within the Financials software, is a COBOL application (FSPGJGEN) that performs a great many SELECTs based on the run control ID parameters. Suppose that running this process takes approximately 2 hours to process only 50 journals.

The first thing to do is to turn on tracing for that specific process and re-run it in your test environment. Always do your tuning in a test environment: you do not want to blindly start adding indexes to your production environment without performing full regression testing, as the results can be catastrophic. Once you have the trace file, you can examine it and look for the timings of the long-running queries.

After examining the trace file, we find the SQL statement that is causing the performance problem. Once you find the SQL statement, you can run it through your RDBMS query tool to determine which indexes are being used. If you are using SQL Server, issue the following command:

SET SHOWPLAN_ALL { ON | OFF }

If you are using Oracle, use EXPLAIN PLAN. Once you execute this command, you can then run your SELECT statement. This returns detailed information about how the statement is executed and provides estimates of its resource requirements, including the indexes that are utilized.
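On Oracle, the equivalent looks like this; the SELECT is again a stand-in, and DBMS_XPLAN.DISPLAY assumes Oracle 9i or later with a PLAN_TABLE available:

EXPLAIN PLAN FOR
SELECT business_unit, voucher_id
FROM ps_voucher
WHERE voucher_id = '00000123';

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);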

The next step is to look at the columns in the WHERE clause of the SQL statement and determine whether the indexes being used, if any, contain these columns. If they do not, you can simply create a new index with the missing columns. Once it is created, re-run your query to re-examine the index usage. Repeat this process until you achieve the improved performance.

In some cases, certain SQL statements will never use an index at all; the result is what is called a full table scan. Full table scans are extremely taxing on the system and cause major performance degradation. If you determine that a SQL query is performing a full table scan, simply create an index or indexes on the columns that are contained within the WHERE clause.

Tuning and adding indexes is one of the most overlooked and very simple ways to improve performance. Just remember the following steps.

- Trace
- Examine the SQL
- Analyze the SQL in your RDBMS tool
- Determine Indexes being used
- Create Indexes with Columns in Where clause
- Re-Analyze the SQL and repeat until you get improved results

Another tip for tuning indexes is to try re-ordering the columns within the index. You can sometimes gain huge performance improvements simply by changing the order of the columns when you create the index. This is a trial-and-error method that you will have to test; there is no hard and fast rule for which column should be placed in what order.

Temporary Tables

PeopleSoft utilizes temporary tables in many of its application programs, especially Application Engine programs. These tables are constantly populated with data and then deleted from, over and over. Each time a temporary table is populated and deleted, databases like Oracle leave the high-water mark in place, which produces full table scans that read far more blocks than the table's current contents justify.

For example, an Application Engine program can insert 200,000 rows and then delete them. The next time the application runs, it inserts only 2,000 rows, yet a read against that table still performs poorly. Additionally, the indexes on these temporary tables become heavily fragmented from all of the deletes. Temporary tables are a common cause of performance problems.

In order to prevent fragmentation and improve performance on the most heavily used temporary tables, you should truncate these tables on a regular basis.
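On Oracle, TRUNCATE resets the high-water mark and deallocates the space, which DELETE does not. The table name below is a placeholder, and the statement should of course only be run when no process is using the table:

TRUNCATE TABLE ps_xyz_tmp;  -- resets the high-water mark; DELETE leaves it in place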


Testing PeopleSoft Performance
Even with a modern packaged application like PeopleSoft, it's possible (and actually quite likely) that performance problems exist and won't show up until after go-live when all of your production users are actively hitting the application.

Why? The reasons have less to do with poorly written software (PS/Oracle usually writes relatively efficient code), and more to do with the complexities of the application architecture. In a typical implementation, IBM HTTP Server handles all HTML traffic and hands off the work to IBM WebSphere. WebSphere manages sessions and communicates with the BEA Tuxedo application server, which handles the business logic layer and communicates with the database server. The operating system(s), system hardware, SANs, load balancers, clustering software, network, etc., all play a part in performance.

Each component has its own tuning parameters, and if any one of these components is not tuned appropriately, performance suffers across the board. That's why it's a good idea to test your performance.

Fortunately, performance testing doesn't have to be a long, drawn out ordeal nor does it have to be expensive. It just requires a plan, a performance testing tool, the right people on the performance testing team, and a way to capture performance data for each component.

The performance testing plan should provide the framework and scope of the testing effort. It should outline the purpose of the testing, define basic terms like "Concurrent User", and include what tests will be conducted, what will be measured, and how you'll know if you're successful. I generally like to name team members in the plan along with contact information just so there's no confusion about what will be expected.

I've used Segue's Silk Performer in the past and it works very nicely for PeopleSoft performance testing. But if you don't have a commercial tool, Jakarta JMeter from the Apache Software Foundation (http://jakarta.apache.org/jmeter/) is an open source tool that has all of the features you'll need.

When you're writing the performance testing scripts, don't try to write a script for every system component. Keep it to just the most commonly used on-line activities. Also, don't try to test the "worst case". Performance testing tools can generate way more transactions than a person could ever enter. Be sure your performance testing tool at its peak won't generate more transactions than you'll find on a busy day in the database. A few quick queries should show if you're in the ballpark. I have heard that PeopleSoft benchmark reports define a concurrent user for Financials as someone who enters one transaction every 5 minutes. In my opinion, that's one very busy person.

Assembling the performance testing team is generally a matter of talking with the folks that already support your application. In addition to your PeopleSoft Application Administrator, you'll want to have a System Administrator to monitor the OS and the Kernel; a DBA to monitor the database; and a Network Engineer to monitor the network. If you feel like you're weak in a particular area, go find someone in your organization that can help. Or if you think you can cover multiple areas, the team can be smaller.

Capturing performance data doesn't have to be a huge deal either. If you're running on UNIX or Linux, it's easy to redirect VMSTAT or SAR output to a file during the test for memory and CPU info. Windows provides a way to capture performance data to a file as well. I have some scripts that use TMADMIN to track the number of users logged on, the number of Tuxedo requests, and whether or not queuing is happening. I generally use the WebLogic console to watch threads and JVM heap info as the test is running. On the database, TKPROF is a great way to measure many things in an Oracle database, and SQL Profiler has a lot of functionality for SQL Server.

I generally divide my tests up so that we start with 1/3 of the target load, then 2/3 for the next test, and finally the full load. While each test is running, I have the performance testing team on a conference call so we can discuss what's happening in real time. I've found that a 20-minute test seems to provide good information, but I don't mind cutting it short, especially if problems become obvious. While the test is running, I like to log on to the system and navigate around, just to get a human's opinion of performance. I keep a performance-testing narrative document up to date with the "play-by-play" info, which always includes at least the time the test started and what memory, CPU, threads, and JVM heap were doing at various times in the test. This narrative helps me make sense of all of the data the team members will send at the end of the test. It also includes any changes that were made between tests, so we can tell what worked and what didn't.

Once we're sure the system will work at the required load, and if time allows, I like to do a stress test to find exactly at what load the system will break down. This gives an idea of the overall system capacity which is a great statistic to know for planning purposes down the road.

Anyway, that's more or less how I approach performance testing. If you have any other ideas or approaches, please share.

Thursday, March 12, 2009

PeopleSoft Security by Bala

1. Object level security:
In object-level security we define security for PeopleSoft objects such as pages, PeopleTools, web libraries, processes, and component interfaces.
2. Data security:
In data security we define restrictions on the data a user can access. Here we see department-level security and PS Query security, in turn called row-level security.

• In PeopleSoft, everything is stored in tables. Security-related information is likewise stored in back-end security tables, e.g. PSOPRDEFN, PSROLEUSER, PSROLECLASS, PSAUTHITEM.
• When a user logs in to the PeopleSoft application, the application server reads the security-related information from these tables and enforces restrictions based on the security that user has.
Object level security:
The security flow for object-level security is:
- Define object-level security on a permission list, i.e. create a permission list and grant it access to whatever objects you want: pages, process groups, component interfaces, web libraries, etc.
- Then create roles and assign permission lists to the roles.
- Create a user profile and then assign roles to the user.
Question: Why do we need permission lists and roles? Why can't we assign permissions on objects directly to a user profile? What is the use of permission lists and roles?
[Screenshots omitted. They showed access to various pages being granted to the permission list AEPNLS.]
Data security:
Data security is handled elegantly in PeopleSoft.
Data-level security means restricting a user from seeing some of the rows of data in a table.
For example, if a JOB table holds 10 rows of data, row-level security can restrict a user to seeing only 5 of them, based on the security that he/she has.
In PS Query, the above example is accomplished with views.
At the component level it is done with department security.
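As a deliberately simplified, hypothetical sketch of the idea (the security table name is made up, and real PeopleSoft search records are more involved):

CREATE VIEW ps_job_srch AS
SELECT s.oprid, j.emplid, j.deptid, j.jobcode
FROM ps_job j, ps_scrty_dept s  -- hypothetical operator/department security table
WHERE j.deptid = s.deptid;

-- PS Query appends a filter on OPRID for the signed-on operator at run time,
-- so each user sees only the rows for departments he/she is authorized for.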

Integration Broker Basics

I don't consider myself an Integration Broker expert. Far from it. I can get messages to fly from one PeopleSoft instance to another, I can write a transformation if pressed, and I can get messages that are stuck in a "New" status working again. So I thought I'd go out on a limb here and pass along my mental model of how I visualize Integration Broker working. If you're an expert, you might find this article simplistic and inaccurate at times, and I'm hoping you'll speak up and let me know of the glaring problems. But if you're new to IB or trying to support somebody else's integrations, you might find something in here useful.
If you think about it, any messaging system needs some basic building blocks to work. Think about an e-mail system. You have a Sender and a Recipient. E-mail addresses are made up of a user (the part before the @ sign) and a domain (the part after the @ sign). You have an e-mail server that routes the e-mail to the Recipient's domain. A server on the Recipient's domain then gets it to the recipient's inbox. When the Recipient opens it, a Return Receipt e-mail might be generated back to the Sender as an acknowledgement.

Integration Broker follows a lot of these same rules. But since its messages are generated and consumed by machines, things are more structured.
Building Blocks of Integration Broker
Nodes
If you think about the e-mail analogy, the Node would be like the Domain part of the e-mail addresses. For PeopleSoft-to-PeopleSoft communication, Nodes are (usually) PSFT_EP for Financials, PSFT_HR for HRMS, PSFT_LM for LMS, and PSFT_CRM for CRM. They basically tell which application a message belongs to.

The node definition is where you define what messages are valid for that node. Prior to PeopleTools 8.48, you'd define them on the "Transaction" tab. In 8.48 and above, you'd define them on the "Routings" tab.

Since you might not want just anybody being able to publish a message to a node, you'll need to set a node password on the first page of the node definition. This password will have to be the same in all of the environments. For example, if you want to publish a message to PSFT_EP from HRMS, the PSFT_EP node password will have to be the same in both Financials and HRMS.
Messages
The message definition is where the developer specifies what data a message will contain. It includes records and fields, and child records are nested under their parent records.

In the e-mail analogy, the message name would be the User part of the e-mail address, and the message itself would be the Body of the e-mail.
Transformations
A transformation is a program that gets executed against a message either when it's sent, or when it's received. If you think about it, this can be important because data structures are different in different PeopleSoft versions. So if you're sending a message with employee data from HRMS 8.8 back to Financials version 8.4, there's a good chance that the message that HRMS sends is different from what Financials is able to receive. So you need a transformation.

The transformation itself is a special type of Application Engine program that Integration Broker can execute by itself against the XML of a message. It uses either PeopleCode or XSLT (a special language for transforming XML) to put the message into the new format.

In the e-mail analogy, this would be like me sending an e-mail to someone who doesn't speak English. I'd either have to translate it before I sent it, or the recipient would have to translate it.

Prior to PeopleTools 8.48, you use "Relationships" to associate a transformation program with a message. In 8.48 and above, you can associate a transformation with a message using either a Service Operation or a Routing.
Gateways
The gateway is kind of like the e-mail server. It knows the nodes – that is to say, for a given node it knows the server name, app server port number, username, and password, so that it can connect to the node's app server and push the message to the Integration Broker running in that environment.

The gateway runs as part of the PIA web server. Integration Broker sends messages to it with a plain ol' HTTP POST request. This makes talking to Integration Broker pretty easy for 3rd-party applications, since they don't have to write any special protocols.
Asynchronous versus Synchronous
Integration Broker can do either Synchronous or Asynchronous messages. Synchronous messages are sent, and the program waits for a successful response from the remote system before it will continue. Asynchronous messages are more like the e-mail analogy – the message is sent and the program gets on with its life, assuming the message will be OK.

Most PeopleSoft EIPs are Asynchronous, so I'll only talk about Asynchronous messaging in this document.
Message Channels / Queues
PeopleSoft lets you group message definitions into queues. Queues can be paused or running. So if you want to keep messages with employee data from trying to go from HRMS to Financials when Financials is going to be down, you can pause the message queue. When the maintenance is over, you can set the queue back to running.

Queues have another, normally unused, feature: you can change how many messages get posted to the Integration Broker at one time by chunking on specific combinations of fields. Message chunking is more of a developer topic, so that's all I'll say about it for now.

Prior to PeopleTools 8.48, queues were called Message Channels. I don't believe there's any real difference in what they are or what they do.
Service Operations
Service Operations were invented in PeopleTools 8.48. I believe the intention was to create a single place where you can define which nodes a message is valid for and what transformations need to be applied to it.
Steps in Integration Broker prior to Tools 8.48
1. A PeopleCode event creates and publishes a message.
2. Integration Broker looks to see if the message channel for that message is active.
3. Integration Broker creates a message instance for the message.
4. Integration Broker looks to see what nodes the message is active for.
5. Integration Broker creates a publication contract for each message node.
6. Integration Broker looks to see if any "relationships" exist for the source node/target node/message/version combination, and executes the transformation associated with the relationship if one exists.
7. For each publication contract, Integration Broker publishes the message (in XML format) to the integration gateway. This includes the Source and Target nodes.
8. The Integration Gateway looks for the target node in its configuration file (integrationgateway.properties), connects to the application server, and passes the message off to the target Integration Broker.
9. Integration Broker creates a message instance for the message.
10. Integration Broker looks to see if the message is set up as an inbound message on the source node.
11. Integration Broker creates a subscription contract for the source node (if active).
12. Integration Broker looks to see if any "relationships" exist for the source node/target node/message/version combination and executes the transformation program if applicable.
13. Integration Broker inserts the message into the database based on the message definition.
Steps in Integration Broker 8.49
The process is basically the same, but the terminology has changed.
1. A PeopleCode event creates and publishes a message.
2. Integration Broker looks to see if the Queue for that message is active.
3. Integration Broker creates a transaction for the message.
4. Integration Broker looks to see what nodes the message is active for.
5. Integration Broker creates a publication contract for each message node.
6. Integration Broker looks to see if any transformation programs exist for the service operation routing or the node routing, and executes them if found.
7. For each publication contract, Integration Broker publishes the message (in XML format) to the integration gateway. This includes the Source and Target nodes.
8. The Integration Gateway looks for the target node in its configuration file (integrationgateway.properties), connects to the application server, verifies that the node passwords from the source and target environments match, and passes the message off to the target Integration Broker.
9. Integration Broker creates a message instance for the message.
10. Integration Broker looks to see if the message is set up as an inbound message on the source node.
11. Integration Broker creates a subscription contract for the source node (if active).
12. Integration Broker looks to see if any transformation programs exist for the service operation routing or the node routing, and executes them if found.
13. Integration Broker inserts the message into the database based on the message definition.
Integration Gateway Considerations
Integration Broker got an overhaul in PeopleTools 8.48, and the older PeopleTools versions are no longer compatible with the newer ones. So how can you actually make the old PeopleTools versions send and receive messages with the new PeopleTools versions? You have to make the older versions use the new Integration Gateway.

So what does that mean? Well, what you have to do is go to PeopleTools > Integration Broker > Gateways, and select the LOCAL gateway. Change the URL to be the same as the LOCAL gateway URL of the environment with the latest copy of PeopleTools.
Now, if you want to change any Gateway configuration, be sure to do it from the latest PeopleTools environment. It's bad luck to edit Integration Gateway configuration using a tools version lower than the one the gateway is running, and older tools versions won't encrypt passwords like the new ones will.

Also, if you shut down a shared Integration Gateway web server, it's going to impact Integration Broker on all of the environments that share it. Messages should catch up whenever you bring the Integration Gateway web server back up, as long as you go to Message Monitor and resubmit the ones in error.
Summary

I'm hoping this clears up some of the complexities of Integration Broker. Please let me know if I didn't say something right, got something wrong, or missed something that should have been covered.

Oracle Database Patching


Oracle Corporation periodically releases patches and patch sets (bundles of individual patches) to address various errors and bugs in the Oracle server software. Patches are fixes to bugs in the Oracle server software. It is likewise necessary to apply patches to operating systems to address various bugs. (OS patches are different from Oracle database patches; download OS patches from the vendor's site. For example, to patch the Solaris OS, download patches from http://www.sun.com/.) If we are in constant communication with Oracle technical support, we'll be aware of patches. The first question technical support asks is whether the database has been upgraded with the most recent patch release/patch set release, and their first recommendation is to apply the latest patches/patch set to the Oracle database.

What is a patch set?
Each Oracle patch set can cover fixes for literally hundreds of bugs. It is recommended to apply a patch set as soon as it is available. One of the primary reasons for this is to see whether your bug is unique to your database or a general solution has already been found for the problem. When we ask Oracle technical support to resolve a major problem caused by a bug, Oracle usually provides us with a workaround. Oracle recommends that we upgrade our database to the latest versions and patch sets because some Oracle bugs may not have any workarounds or fixes.

Is it recommended to apply patches/patch sets directly on production systems?
No. It is not advisable to apply patches directly to production systems; we don't want to jeopardize the performance of our production system. The ideal solution is to maintain a test server where the new software (patch set) is tested thoroughly before being moved into production as early as possible. It is also wise to collect baseline performance statistics before making major changes like applying patches/patch sets to a system.

What are Critical Patch Updates?
Critical Patch Updates are comprehensive patches that address significant security vulnerabilities and include fixes you can apply. They are a prerequisite for later security fixes.

Metalink and patch updates:
An important part of security management is keeping up with the latest news about security vulnerabilities and the patches to overcome them. Oracle has a policy of quickly issuing fixes for new security problems. We should check for the latest security patches on the Oracle Metalink website, http://metalink.oracle.com/. We can find regular security alerts at http://technet.oracle.com/deploy/security/alert.htm, and notes about security breaches in the "News and Notes" section of the Metalink site. If we wish, Oracle will send e-mail security alerts about new issues; we can sign up for this free service by registering at http://otn.oracle.com/deploy/security/alerts.htm. Oracle provides Critical Patch Updates on a quarterly schedule, and customers are notified of these updates via Metalink, the OTN security alerts page, and the Oracle security RSS feed. If you're already a Metalink subscriber, you are automatically signed up for the Critical Patch Updates.

What about an emergency bug?
If a patch addresses a severe threat, Oracle will not wait for the quarterly cycle to send the patch to you. In such cases Oracle will issue an unscheduled security alert through Metalink and will let you download the patch immediately. The patch will also be included in the next quarterly Critical Patch Update.
These patches are called interim patches, and we can use the opatch utility provided by Oracle to apply them.

How frequently should I patch my system?
We can have a regular, planned quarterly schedule for patching our system. A single patch on a quarterly basis is better than a number of patches that need extensive testing and may conflict with each other.

What is a risk matrix?
Oracle has introduced a new Risk Matrix along with its quarterly Critical Patch Updates. The risk matrix enables customers to estimate the scope and severity of the vulnerabilities addressed by each Critical Patch Update. It tells us the threat we face to confidentiality, integrity, and availability, and the conditions under which our system is most exploitable. We can thus assess the risk to our systems and prioritize the patching of those systems.

How do I apply an Oracle patch?
Database Control is a GUI that is bundled with the Oracle server software. We can download patches from Metalink using the Database Control page. From the Patching Setup page (a tab option on the Database Control page) we can enter our Metalink credentials to search for new patches and apply them.

Downloading and installing patches:
1) Check the Oracle Metalink website for the patches your installation requires. Use a web browser to view the site: http://metalink.oracle.com/
2) Log in to Oracle Metalink.
3) On the main Metalink page, click Patches. Select Simple Search, specify the following information, and then click Go:
- In the Search By field, choose Product or Family, then specify RDBMS Server
- In the Release field, specify the current release number
- In the Patch Type field, specify Patchset/Minipack
- In the Platform or Language field, select your platform

Will patch application affect system performance?
Application of certain patches can affect the performance of good SQL statements, so collect a set of performance statistics that can serve as a baseline before making any major change like applying a patch to the system.

Patches in a Grid environment:
We can use Oracle Enterprise Manager (OEM) Grid Control to patch Oracle and manage warnings about critical patches. The Grid Control home page has an overall view of the entire enterprise and provides details on the health of each database. It includes Critical Patch advisories, which display a visual summary of any patch advisories and the affected Oracle homes.

Note: Patching refers to fixing bugs in the Oracle server software, not the Oracle database. The Oracle database is the file structure for storing data; any change to that would affect data integrity.