Wishing you a happy, healthy and prosperous 2017.
Selective adoption is one of the most talked-about topics in PeopleSoft, and most of you are already familiar with its basics, so I will not dwell on them here. Essentially,
- Application maintenance is now on continuous delivery, with new update images released every 8 – 12 weeks
- No need for major upgrades to get current; customers can selectively choose ‘what’ and ‘when’ they want to update
Below are some handy links for detailed information on selective adoption,
- Concept page from Oracle on Selective Adoption is a good starting point
- A Practical Approach to Optimize Your Selective Adoption
A key factor that determines the effectiveness of the selective adoption process is the ‘selection procedure’ itself, i.e. how customers go about selecting ‘what’ they need to apply. Typically an update image consists of new features, enhancements, statutory patches and bug fixes. While the major changes get highlighted in the ‘video feature overview’, the full list of changes in an update image can run into thousands of items. If you are trying to catch up on a few update images, that makes a rather long list for manual review. Combing through this list and effectively selecting the changes that add value to your business can be a cumbersome exercise.
In this article I discuss a weighted scoring model to simplify this selection procedure. The objective is to evaluate (score) the image contents against a set of priorities defined to reflect your selective adoption scope. This exercise assigns each change item a cumulative score that indicates its suitability for selection: typically, the higher the score, the more desirable the item is for selection.
For the illustration below, I have used a scoring model of 50 points; the cumulative score for each change item indicates its desirability for adoption on the scale below.
The full list of an update image’s contents is available for download from the PUM homepage on the Oracle support site. This ‘Update Image Content list’ document is used as the source data for the weighted scoring model. The location of this document for HCM 9.2 Image 20 is shown in the snapshot below,
A copy of content list document for HCM 9.2 Image 20 – Updates_Included_HCM92020
This list is in MS Excel format and is cumulative, including all items from Image 1 through the latest image. Each individual content item in the list is identified by a unique ‘Bug Id’ along with a brief description and other related information as listed below.
The following attributes (columns) in the content list document help in qualifying a change and its suitability for selective adoption.
- Minimum PeopleTools version – 8.55, 8.54, etc.
- Bug Type – Bug fix, feature, sub-feature, etc.
- Severity – Critical, Major, Minor, etc.
- Product / Product component – Human Resources, Global Payroll, etc.
For instance, if a customer is only keen on taking high-priority bug fixes for business-critical modules, then changes of type ‘Bug’ with a severity of ‘Critical’ or ‘Major’ in the selected modules would be of high interest to them.
In the weighted scoring model, these qualifying attributes are assigned weighted scores that reflect the priorities of the customer’s selective adoption scope. These scores are then used to evaluate the change items, thereby highlighting the changes that need the most attention.
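The mechanics of the model can be sketched in a few lines of code. This is an illustrative sketch only: the attribute names and weight values below are assumptions, not the actual contents of the template.

```python
# Illustrative weights per attribute value. In the real template these
# live in the 'Setup' worksheet; the values here are made up.
WEIGHTS = {
    "bug_type": {"Bug": 5, "Sub-Feature": 3, "Feature": 1},
    "severity": {"Critical": 5, "Major": 4, "Minor": 1},
    "module":   {"Global Payroll": 5, "Human Resources": 4},
}

def cumulative_score(item):
    """Sum the weight of each attribute value; unknown values score 0."""
    return sum(
        WEIGHTS[attr].get(item.get(attr), 0)
        for attr in WEIGHTS
    )

# A hypothetical change item from the content list
item = {"bug_type": "Bug", "severity": "Critical", "module": "Global Payroll"}
print(cumulative_score(item))  # 5 + 5 + 5 = 15
```

A change item whose attributes all match high-priority values accumulates a high score, which is exactly what the scorecard worksheet computes per ‘Bug Id’.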
Weighted Scoring Analysis
Attached here is a template document that can be used to perform weighted scoring analysis.
The document has the following worksheets; the steps to perform the weighted scoring analysis are explained below.
- Setup – Sheet to assign weighted scores to various evaluation attributes
- Image Contents – Target sheet to copy the content list to be evaluated
- Scorecard – Result sheet containing the evaluated score for individual contents
- Report – Sheet with analytics based on evaluated scores
Assigning weighted scores
This activity directly translates the selective adoption objectives at a specific customer site into numeric scores. For instance, if a customer’s objective is to stay up to date on bug fixes while there is little appetite for new features, then the rating score for ‘Bugs’ should be high and that for ‘New feature’ should be low.
- A simple rating scale of 1 – 5 should be sufficient to evaluate the various attributes; where needed, multiply the rating scale by a constant to increase the weight of a specific factor
- Assign higher rating scores to desirable selection criteria, so that a higher cumulative score indicates the item is desirable for selection
Below is an illustration of assigning weighted scores to various evaluation attributes.
Minimum Tools Version
This is a significant factor when selecting content for selective adoption, as the current tools version at a customer site determines the ability to take up supported application features. If tools patching or an upgrade can be included in the scope of the update exercise, then features from higher tools versions can be included; otherwise they need to be excluded. The weighted scores for the various tools versions should reflect the customer’s ability to patch or upgrade to each specific version.
The example below illustrates a customer on PT 8.53.10: that version gets the maximum score of 20, the next patch (8.53.15) gets a slightly lower score, and each higher tools version gets a progressively lower score reflecting the difficulty of a tools upgrade.
PS: In this example the 1 – 5 rating scale is multiplied by a factor of 4 to give this evaluation factor more weight relative to the others.
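The 8.53.10 example above can be sketched as a simple lookup. The specific ratings for 8.54 and 8.55 are my assumptions; only the 8.53.10 → 20 mapping comes from the example.

```python
# Rating (1-5) multiplied by a weight factor of 4, for a customer
# currently on PT 8.53.10. Ratings below 8.53.15 are illustrative.
TOOLS_WEIGHT_FACTOR = 4
TOOLS_RATING = {
    "8.53.10": 5,  # current version: no patching needed
    "8.53.15": 4,  # next patch: small effort
    "8.54":    2,  # minor upgrade: significant effort
    "8.55":    1,  # latest release: largest effort
}

def tools_score(min_tools_version):
    """Weighted score for a change item's minimum tools version."""
    return TOOLS_RATING.get(min_tools_version, 0) * TOOLS_WEIGHT_FACTOR

print(tools_score("8.53.10"))  # 5 * 4 = 20
print(tools_score("8.55"))     # 1 * 4 = 4
```

Versions the customer cannot reach in this cycle score 0, effectively excluding those change items.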
Bug Type
This is a straightforward yet significant attribute for defining the scope of selective adoption.
Note: Sub-Features are typically enhancements to previously released features.
Severity
This is a crucial factor, especially for qualifying content of type ‘bug fix’.
Product / Module
This factor, which denotes the application module associated with a change, is very useful in highlighting the significance (impact) of a specific content item. The weighted score for this attribute can be derived from a combination of factors, as shown here.
- Module Importance – Score this factor to indicate the importance of specific module to customer’s business and the appetite for change in the respective business group
- Customization Impact – Score this factor to indicate how easy it is to take up patches, which is inversely proportional to the amount and quality of customization in the specific module
- Test Repository – Score this factor to indicate the level of test scripts available for a specific module
- Users Impacted – Score this factor to indicate the usage of the module across the organization. Note: Modules like payroll, though used by a limited number of users, indirectly impact the whole organization and so should have higher scores.
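The four sub-factors above can be combined into a single module score. A minimal sketch, assuming each sub-factor is rated 1 – 5 and a plain sum is used (the template may weight them differently):

```python
def module_score(importance, customization_ease, test_coverage, user_reach):
    """Composite Product/Module score: sum of four 1-5 sub-factor ratings.

    customization_ease is rated high when the module is lightly
    customized (patches are easy to take up)."""
    return importance + customization_ease + test_coverage + user_reach

# Hypothetical example: a business-critical, lightly customized module
# with good test scripts and organization-wide indirect impact
print(module_score(importance=5, customization_ease=4,
                   test_coverage=4, user_reach=5))  # 18
```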
Assigning weighted scores is typically a once-per-cycle activity; the scores should be revised for each update cycle to reflect the scope of that specific cycle.
Loading Image Contents
As mentioned earlier, the ‘Update Image Contents Document’ on Oracle support is cumulative, so download the document and, if needed, filter on the column ‘Image Number’ to include only the items between the image versions relevant to the customer. For instance, if the customer is already patched to Image 11, then filter to include only Images 12 – 20 for the next selective adoption review.
Copy the filtered contents from the source document into the weighted scorecard template document, onto worksheet ‘2. Image Contents’.
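If you prefer to pre-filter the list programmatically rather than in Excel, the image-number filter looks like this. The field names and sample rows are illustrative, not the actual columns of the Updates_Included document.

```python
# Hypothetical rows from the cumulative content list
rows = [
    {"bug_id": 101, "image": 10},
    {"bug_id": 102, "image": 12},
    {"bug_id": 103, "image": 15},
    {"bug_id": 104, "image": 20},
]

# Customer is already patched to Image 11: keep only Images 12-20
in_scope = [r for r in rows if 12 <= r["image"] <= 20]
print([r["bug_id"] for r in in_scope])  # [102, 103, 104]
```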
Reviewing Cumulative Scorecard
In the template document, once the weighted scores are set up and the image content list is loaded for review, the resulting scorecard is available on worksheet ‘3. Scorecard’.
The column ‘Cumulative Score’ contains the total score assigned to a content item (Bug Id) based on the attributes evaluated. Scores are color coded by range, indicating their significance for selective adoption.
A higher cumulative score indicates the content is desirable for selective adoption, as the change is of high impact (importance) and low cost to implement. Filter on this column to review and finalize the scope of items for selective adoption.
The last worksheet in the template document contains a pivot analysis showing the spread of content items across the range of cumulative scores. This gives a broader view of how the various evaluation factors perform in either limiting or selecting a group of contents, which maps directly to your selective adoption scope.
This approach provides a shortened list of change items that can then be reviewed manually for adoption. Its effectiveness relies on the accuracy of the weighted scores assigned to the various evaluation factors and how well they reflect the customer’s selective adoption scope. The idea is to objectively simplify the selection process, thereby reducing the time and effort of managing selective adoption cycles. Feel free to download the scoring template and update it to suit your needs.