Demystifying IBM Cognos 8 Security


One of the most challenging aspects of establishing any enterprise business intelligence platform is the design and implementation of a security model that is flexible and extensible enough to meet the needs of your business while also remaining simple enough to administer and maintain. Every organization is unique in how it operates, so a structure that best suits one may not necessarily be appropriate for another. Because of these fundamental differences, there is, unfortunately, no single archetype for business intelligence security. There are, however, a few proven guidelines which, if followed, will help to ensure that whichever solution you arrive at works in harmony with your IBM Cognos 8 environment.

Centralization

Unlike prior generations of enterprise business intelligence software, IBM Cognos 8 has no local or proprietary method of authentication. These responsibilities have thankfully been off-loaded to the enterprise authentication provider of your choosing (LDAP, Active Directory, SiteMinder, etc.). Once authentication is performed by the third-party resource, group- and role-based authorization is applied from within Cognos 8. Out of the box, Cognos is configured with a standard set of application-based groups and roles that assign capabilities to the various studios and other aspects of IBM Cognos 8 in a manner that aligns closely with the typical software licensing model.
At this point it is tempting to begin mapping users from your external directory directly to default roles within the Cognos namespace, and creating new groups and roles to organize users in ways that satisfy the requirements of your security model (by function, region, business unit, etc.). If you have ventured down this path, then you can likely attest that while it may be a near-term fix, it quickly becomes cumbersome to manage. The IBM Cognos directory lacks many of the creature comforts of a fully featured directory service management interface, so if your organization's IT policies permit, you are almost always better served managing group and role membership for your BI security model within the directory service itself. Providers such as Microsoft's Active Directory allow for delegation of management to specific users, and offer a management console simple enough for almost anyone to use. The key to this approach is understanding that moving your BI security model into your directory service does not equate to an increased burden on dedicated information security resources within your organization.
There are multiple benefits to this approach. As mentioned, it centralizes the administration and maintenance of your BI security architecture at a single point, and it enables those tasks to be done in the most effective manner possible (natively within the directory service itself). If you operate multiple IBM Cognos 8 environments, you can quickly and easily mirror the same security model across all of them. You can also take this approach one step further: rather than map your custom groups and roles from your directory service directly into IBM Cognos 8, you can decouple your security model from the application by creating matching groups and roles within the built-in namespace, and then performing one-to-one mapping from your directory to these placeholder objects. This technique, while adding some overhead, will make your IBM Cognos 8 environment agnostic to its underlying authentication provider and give you the flexibility to change the provider more easily if the need arises.

Consolidation

Even if you have selected a single primary authentication provider for IBM Cognos 8 and proceeded to centralize on this platform as described above, you may encounter additional heterogeneous security providers that exist only to handle authorization duties for specific data sources. For example, your data warehouse may have its own proprietary set of security tables or utilize built-in security, and Series 7 Cognos customers hoping to migrate their PowerCubes into IBM Cognos 8 might feel tethered to their legacy Access Manager namespace. The value of centralizing your security model for ease of management quickly degrades when every change must be made in triplicate across these additional security providers, and the need to consolidate becomes very apparent.
[Figure: IBM Cognos 8 security model]
IBM Cognos 8 can enable this sort of consolidation by utilizing features from various components of its application architecture. You can leverage your groups and roles from the IBM Cognos 8 namespace to apply data-level security from within Framework Manager, or even pass a user's single sign-on credentials straight through to the database. In addition, the latest version of IBM Cognos 8 Transformer fully supports the use of the Cognos namespace for data-level security within your cubes. To facilitate this process, the Ironside Group has developed a methodology and a set of tools to help you migrate your old Access Manager namespace to IBM Cognos 8.

Integration

Sometimes there isn’t an out-of-the-box, fully consolidated or centralized solution to suit the needs of your specific organization, but that doesn’t mean you should have to compromise on your requirements or change the way you work to meet the needs of your technology investment. Your business intelligence platform should conform to your business needs, not the contrary. Stay tuned for a future article that addresses a few of the more unorthodox security integration challenges that are likely to arise in the real world.

Determinants – The Answer to a Framework Manager Mystery


Determinants can play a crucial role in the overall performance and consistency of your Framework Manager model but remain one of the most confusing aspects of the application to most developers. This article will attempt to end the confusion.
Determinants are used so that a table stored at one grain (level of detail) behaves as if it were actually stored at another grain. They are primarily used for dimension tables where fact tables join to the dimension at more than one level. (There are other cases where you could use them, but they are less common and fairly specific situations.)

The Situation

Let’s use the example of a date dimension table with day-level grain. If all the fact tables join at the day level, the most detailed level, then you do not need determinants. But as many of us know from experience, this is not always the case. Fact tables are often aggregated or stored at different levels of granularity for a number of reasons.

The Problem

The trouble arises when you wish to join to the dimension table at a level that is not the lowest level. Consider a monthly forecast fact table which is at the month level of detail (1 row per month). A join to the month_id (e.g. 2009-12) would return 28 to 31 records (depending on the month) from the date dimension, and throw off the calculations. Determinants solve this problem.

The SQL

Often when modeling, it’s useful to think about the SQL code you would like to generate. Without determinants, the incorrect SQL code would look something like this:
SELECT
F.FORECAST_VALUE,
D.MONTH_ID,
D.MONTH_NAME
FROM SALES_FORECAST F INNER JOIN DATE_DIM D ON
F.MONTH_ID = D.MONTH_ID
This code will retrieve up to 31 records for each of the sales forecast records. Applying mathematical functions, for example Sum and Count, would produce an incorrect result. What you would like to generate is something along the following lines, which creates a single row per month and then joins to the fact table:
SELECT
F.FORECAST_VALUE,
D1.MONTH_ID,
D1.MONTH_NAME
FROM SALES_FORECAST F INNER JOIN
( SELECT DISTINCT
D.MONTH_ID,
D.MONTH_NAME
FROM DATE_DIM D ) AS D1
ON F.MONTH_ID = D1.MONTH_ID
As shown above, the trick is to understand which columns in the dimension table are related to the month_id, and are therefore unique for a given key value. This is exactly what determinants do for you.

Unraveling the Mystery in Framework Manager

Following Cognos best practices, determinants should be specified at the layer in the model in which the joins are specified.
Here we see a date dimension with four levels: Year, Quarter, Month, and Day.
[Screenshot: date dimension query subject with Year, Quarter, Month, and Day levels]
This means we can have up to four determinants defined in the query subject, depending on the granularity of the fact tables present in your model. The first three levels, Year, Quarter, and Month, should be set to “group by” as they do not define a unique row within the table, and Framework Manager needs to be made aware that the values will need to be grouped to this level. In other words, the SQL needs to “group by” a column or columns in order to uniquely identify a row for that level of detail (such as Month or Year). The Day level (often called the leaf level) should be set to “uniquely identified”, as it does uniquely identify each row within the dimension table. While there can be several levels of “group by” determinants, there is typically only one uniquely identified determinant, defined by the unique key of the table. The “uniquely identified” determinant by definition contains all the non-key columns as attributes, and is set automatically at table import time, if it can be determined.
The Key section identifies the column or columns which uniquely identify a level. Ideally this is one column, but in some cases it may need to include more than one column, for example if your Year and Month values (1-12) are stored in separate columns. In short, the key is whatever columns are necessary to uniquely identify that level.
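As a quick illustration of a multi-column key, here is a minimal sketch of the kind of grouping such a determinant implies; the YEAR_NUMBER, MONTH_NUMBER, and MONTH_NAME column names are hypothetical:
-- Month level identified by a two-column key (year plus month number)
SELECT
D.YEAR_NUMBER,
D.MONTH_NUMBER,
MIN(D.MONTH_NAME) AS MONTH_NAME
FROM DATE_DIM D
GROUP BY
D.YEAR_NUMBER,
D.MONTH_NUMBER
Both key columns are required in the group by; either one alone would not isolate a single month.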
Using our aforementioned table, the setup would look like this:
[Screenshot: determinant keys and attributes defined in Framework Manager]
The Attributes section identifies all the other columns which are distinct at that level. For example, at the month_id level (e.g. 2009-12), columns such as month name, month starting date, and number of days in the month are all distinct. And obviously items from a lower level, such as date or day-of-week, are not included at that level.
Technically, the order of the determinants does not imply levels in the dimension. However, columns used in a query are matched against the determinants from the top down, which can be very important to understanding the SQL that will be generated for your report. If your report uses Year, Quarter and Month, the query will group by the columns making up the Year key, Quarter key and Month key. But if the report uses just Year and Month (and not the Quarter), then the group by will omit the Quarter key, as sketched below.
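To make the top-down matching concrete, here is a hedged sketch of the two grouping shapes; the YEAR_ID and QUARTER_ID key columns are hypothetical:
-- Report uses Year, Quarter and Month: keys from all three determinants are grouped
SELECT D.YEAR_ID, D.QUARTER_ID, D.MONTH_ID, MIN(D.MONTH_NAME) AS MONTH_NAME
FROM DATE_DIM D
GROUP BY D.YEAR_ID, D.QUARTER_ID, D.MONTH_ID

-- Report uses only Year and Month: the Quarter key is omitted from the group by
SELECT D.YEAR_ID, D.MONTH_ID, MIN(D.MONTH_NAME) AS MONTH_NAME
FROM DATE_DIM D
GROUP BY D.YEAR_ID, D.MONTH_ID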

How Many Levels Are Needed?

Do we need all 4 levels of determinants? Keep in mind that determinants are used to join to dimensions at levels higher than the leaf level of the dimension. In this case, we’re joining at the month level (via month_id). Unless there are additional joins at the year or quarter level, we do not strictly need to specify those determinants. Remember that year and quarter are uniquely defined by the month_id as well, and so should be included as attributes related to the month, as shown.
[Screenshot: Month determinant with Year and Quarter included as attributes]

The Result

Following these simple steps, the following SQL will be generated for your report. The highlighted section is generated by the determinant settings. Notice how it groups by the Month_ID and uses the min function to guarantee uniqueness at that level. (No, it doesn’t trust you enough to simply do a SELECT DISTINCT.) The second level of group by is the normal report aggregation by report row. The result is that the join is done correctly, with each monthly fact record joined to one dimension record at the appropriate level, producing the correct values in the report.
[Screenshot: generated SQL for the report, with the determinant-driven subquery highlighted]
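If the screenshot isn’t handy, the generated SQL has roughly the following shape; this is a simplified sketch, and the exact SQL Cognos produces varies by version and database:
SELECT
D1.MONTH_ID,
D1.MONTH_NAME,
SUM(F.FORECAST_VALUE) AS FORECAST_VALUE
FROM SALES_FORECAST F INNER JOIN
( -- produced by the Month determinant: one row per MONTH_ID
SELECT
D.MONTH_ID,
MIN(D.MONTH_NAME) AS MONTH_NAME
FROM DATE_DIM D
GROUP BY D.MONTH_ID ) AS D1
ON F.MONTH_ID = D1.MONTH_ID
GROUP BY -- normal report aggregation by report row
D1.MONTH_ID,
D1.MONTH_NAME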

IBM Cognos 8 Framework Manager Model Design


For those starting out with Cognos 8, or even those who have worked with it for quite a while, Framework Manager model design can seem like a daunting or even overwhelming task. With so many options available, it can be hard to know what to consider during design, and how to promote stability and sustainability.
This is the first in a two-part article that will focus on the best practices, organization and overall methodology of Framework Manager model development. As is often the case with complex tools, even developers who have worked with Cognos for a while can be confused by at least some of the concepts and considerations of model design. This first article will start from the beginning and describe an overall four-layer approach, detailing the purpose and reason for each of the four layers. Next month’s article will describe advanced best practices and other tips.

Four Layer Approach

One of the basic tenets of best-practice model design is to segment the model into four specific sections or layers (Data, Logical, Presentation and Dimensional). Each layer has a specific function and set of modeling activities. Generally, the layers build upon one another, with the data layer being the foundation of the model. Other areas, such as packages and connections, are also important, but fall outside of the layers. Many of the best practices can be thought of in terms of what development activities should or should not be performed in each of the layers.

Data Layer

The data layer, also called the import layer, contains the data source query subjects, based directly on the underlying database objects. Whenever possible, use unmodified SQL (“select * from [table name]”) to retrieve the table information. If you modify the SQL code or add a filter or a calculation to the data source query subject, it eliminates Framework Manager’s metadata caching capabilities. This means that Cognos 8 will validate the query subject at report run time, adding overhead to the report load time. In some circumstances this may be worth the trade-off, but it should be avoided when possible.
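As a simple illustration (the table and column names are hypothetical), the first form below leaves the query subject unmodified and preserves metadata caching, while the second embeds a filter and therefore forces validation at report run time:
-- Unmodified data source query subject: metadata can be cached
SELECT * FROM SALES_FORECAST

-- Modified query subject with an embedded filter: validated at report run time
SELECT * FROM SALES_FORECAST WHERE FORECAST_YEAR >= 2008
If you need the filter, consider applying it in the logical layer instead, so the data layer query subject stays untouched.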
Add joins, cardinality and determinants at this level. Cardinality, the definition of how joins should behave, is critical to a well-developed model and can be confusing, even to seasoned veterans. In lieu of a lengthy discussion within this article, consider other IBM documents which offer a thorough discussion of cardinality and the typical types of situations you may see.
Expect 20 to 40 percent of the model development to take place in the data layer.

Logical Layer

The logical layer adds the bulk of the meta-data to the model, and consequently is where most of the work is likely to occur. It provides business context and understanding to the data objects. Tasks include but are not limited to:
  • Organizing elements into logical query subjects. This may mean combining elements from multiple tables, as in a snowflake dimension.
  • Renaming elements, including descriptions and tooltips, and adding multiple language definitions if needed. Assign standardized and business-defined terms to the database columns, giving them business context.
  • Adding calculations and filters, including stand-alone filters, embedded filters and security-based filters. Base these on the underlying data layer objects to make them less susceptible to errors when copying and reusing the query subject.
  • Arranging query subject items in a user-friendly manner. Make use of folders to group similar items, or when there are too many items. I suggest 12 or fewer objects per folder, although I have used more when the logical breakout calls for it. For example, if there are many dates in a query subject, it might be more intuitive to have all the dates in a single large folder than to try to subdivide each into smaller, random folders. Arrange the contents of the folder in an intuitive manner, such as alphabetically, or with the most commonly used items at the top. HINT: It can be useful to use the “Reorder” command available within FM for this purpose.
  • Assigning output formats and usages to the reporting elements. This is easier if you create a small folder of trivial calculations, used only to provide standardized object format templates. The formats can easily be copied to target items by multi-selecting and dragging the topmost format through the list.
  • Adding prompts, including cascading prompts, and prompt-based calculations.
Roughly 50 to 70 percent of the modeling work occurs in this section.

Presentation Layer

The importance of making information easy for report writers to use is frequently underestimated, but it is a critical component of driving user adoption. Fortunately, this can be a simple step. The presentation layer is used only as an organizational structure to make it easier for report writers to find the information they need. This layer includes only shortcuts to existing items in the logical layer, plus organizational items such as folders and namespaces.
For example, create a namespace called Orders and include shortcuts to the ten or twenty relevant query subjects, out of perhaps a hundred or more query subjects in the logical layer. Also include shortcuts to relevant filters. Commonly used query subjects (such as Items or Customers) will appear in multiple areas. Rename the shortcuts to something which provides helpful business context.
For organizing major groupings, you can use either folders or namespaces. However, be aware that namespace names must be unique, and that items within a namespace must likewise be unique. So, if you use folders for your major groupings, you cannot have a shortcut named “Items” in more than one folder; you must rename them to unique names, such as “Order Items”. For this reason in particular, I generally prefer to use separate namespaces.
The presentation layer takes approximately 10% of the overall model design effort.

Dimensional Layer

The dimensional layer is required only for models which include dimensionally modeled data. Leaving aside the trivial situations where cubes are simply imported into the model, this includes dimensionally modeled relational data (DMR).
Specifically, this is where you create Dimensional and Measure Dimension query subjects. Much like the presentation layer, this layer is also built upon the logical layer, which leverages the effort already put into that layer. Apply element renaming, descriptions and tooltips in the logical layer, and they can be reused in the dimensional layer, with some help from the search tool.
Finally, note that when you are creating the final package, you should hide the data and logical layers so that the user will only see the presentation and/or dimensional layers.
Last month’s article described the overall approach for best-practice model design using the four-layer architecture. In this month’s article, I’ll describe some advanced issues which arise during model design, and how best to deal with them. Given the limitless and complex situations that can come up during modeling, these techniques may or may not apply to your specific situation.

Data Sources

One area which many modelers neglect is data sources. A key part of the data source is the Content Manager data source, which references a data source configured via the web interface in the Administration area of Cognos Connection. You should have as few of these as possible, ideally only one per database, unless there are very specific reasons (such as security). Even so, many security or logon issues can be addressed via multiple connections or signons within a data source.
Framework Manager will automatically create multiple data sources (one per schema, for example), but they should reference the same Content Manager data source. The reason for minimizing these is that multiple Content Manager data sources will open separate connections to the database and force Cognos to perform simple joins within the reporting engine instead of in the database, where they belong.

Relationships and Cardinality

The area which causes the most confusion and difficulty in modeling is relationships and cardinality. There are a number of documents which discuss the handling of data modeling traps, such as the Guidelines for Modeling Metadata document included in the documentation, so I will not duplicate those discussions here and will instead address other topics.
A common concern is when to alias a table in the data layer. In general, common dimension tables used across multiple fact tables, or contexts, should not be aliased. This is important in creating conformed dimensional reports. For example, a parts dimension is the same when used for reporting against sales or inventory fact tables. When a table has separate usages, or meanings, it should be aliased; for example, a general-purpose address table which contains branch, customer and employee addresses. These usages are often identified by having multiple join paths or filters, which can signal different meanings.
Another situation in which to use aliases is when there are multiple usages of a dimension table. The most common occurrence of this is the general-purpose date dimension table. Applications such as health and insurance can have many different joins to a date dimension, each of which has a different context. This can be complicated to track, because one usage, such as Admission Date, must apply in a conformed manner to many fact tables, while a different usage (Treatment Date) must also be conformed across many of the same fact tables. Obviously a meaningful naming convention is required!
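A minimal sketch of the underlying SQL for a role-playing date dimension follows; the CLAIM_FACT table and its column names are hypothetical, but the pattern is simply the same DATE_DIM table aliased once per usage:
-- DATE_DIM is aliased once per role it plays against the fact table
SELECT
C.CLAIM_ID,
ADMISSION_DT.DATE_VALUE AS ADMISSION_DATE,
TREATMENT_DT.DATE_VALUE AS TREATMENT_DATE
FROM CLAIM_FACT C
INNER JOIN DATE_DIM ADMISSION_DT
ON C.ADMISSION_DATE_ID = ADMISSION_DT.DATE_ID
INNER JOIN DATE_DIM TREATMENT_DT
ON C.TREATMENT_DATE_ID = TREATMENT_DT.DATE_ID
In Framework Manager, each alias becomes its own query subject (for example, Admission Date and Treatment Date), each with its own relationship to the fact query subject.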
If you do decide that your logical query subject requires a different alias, make sure to make liberal use of the Remap to New Source option, available by right-clicking on the query subject.

Controlling Stitch Queries

One frequent point of confusion is how the model causes reports to create stitch (or multiple) queries. Resolving this requires a firm understanding of the cardinality in relationships. Also realize that stitch queries are sometimes the best way of issuing a query.
Report Studio will generate simple, consistent queries when there is a consistent progression of “1-to-many” relationships. This is a smooth progression from dimensions to facts, and Cognos can easily generate facts with correct aggregations in this simple scenario. Remember that Cognos will treat the query subject at the end of one or a series of 1:N relationships as a fact table.
Similarly, it also works well in the case where multiple “dimensions” converge on a single fact table, as shown below. An easy visual check is to make sure that the 1:N relationships proceed in the same direction, with the ultimate fact table serving as the “Many” end-point of several dimensions.
The stitch issue occurs when there is more than one “Many” side along the same dimension path, as shown below.
A query which utilizes both of the rightmost, or “fact”, tables will result in a stitch query, as the only relationship is via a conformed “dimension” table. In a query where inventory (fact) and sales (fact) are compared at a product (dimension) level, a stitch query is the correct approach. However, if this is an unintentional or poorly modeled relationship, incorrect or inefficient queries will be generated. Understanding the implications of cardinality in relationships is the best defense against this.
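As a rough sketch of what a stitch query looks like (simplified; the SQL Cognos actually generates is more elaborate and varies by version), each fact is first aggregated to the conformed product grain and the two result sets are then stitched together on the dimension key. The table and column names here are hypothetical:
SELECT
COALESCE(S.PRODUCT_ID, I.PRODUCT_ID) AS PRODUCT_ID,
S.SALES_QTY,
I.INVENTORY_QTY
FROM
( SELECT PRODUCT_ID, SUM(QUANTITY) AS SALES_QTY
FROM SALES_FACT GROUP BY PRODUCT_ID ) S
FULL OUTER JOIN
( SELECT PRODUCT_ID, SUM(ON_HAND_QTY) AS INVENTORY_QTY
FROM INVENTORY_FACT GROUP BY PRODUCT_ID ) I
ON S.PRODUCT_ID = I.PRODUCT_ID
Because each fact is aggregated independently before the join, neither measure is inflated by the other fact’s grain, which is exactly why a stitch query is the correct approach for cross-fact comparisons.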

Naming Conventions and Nomenclature

Another area of confusion is naming conventions and nomenclature. The best practice is to have the query subject and element names all carefully thought out and implemented before developing the model, but that doesn’t always happen.
First, let’s describe how Framework Manager and Report Studio consider element names. Each element is uniquely identified within the package by a three-part name: the element itself, the parent query subject (or shortcut) name, and the namespace containing the query subject. Element names must therefore be unique within the query subject, and similarly query subject names must be unique within the namespace. However, the namespace name must be unique within the entire model. So the element “Part Name” within a query subject “Part Master” can appear under more than one namespace, but a namespace such as “Purchasing” or “Inventory” can only occur once within the model, making the fully qualified name unique. Notice that folder names do not come into play when defining names; they are used only for organizing other items.

Managing Name Changes

The result of all this is that names within the model are very important, especially once report writing begins, because they are difficult to change without invalidating existing reports.
So, in order of preference, maintain consistent names across models by:
  1. Get it right the first time. If multiple areas of the organization have a stake in naming conventions, involve all organizations early in the design process, even if the first stages do not apply to some of them.
  2. Identify situations where a renamed item is used. Here the option “Find Report Dependencies” is invaluable in identifying reports where an item is used. Keep in mind, though, that if your model references a development server, it will not identify reports or queries which exist only in your production environment. Therefore it may not adequately identify issues in every environment.
  3. Move the renamed item to a “deprecated” query subject folder, and create a similar element with the new name. While this relies on a certain amount of end-user training, to instruct report and query writers not to use objects within a folder named “DO NOT USE” or similar, it ensures that existing reports will still run correctly while the new, correct name is used moving forward.

What’s new in IBM Cognos 10 Framework Manager


While Framework Manager didn’t receive the dramatic changes in IBM Cognos 10 that some other areas did, there are still some interesting and useful new features, which might have slipped under the radar with the new release.

The main new features are:
  • The ability to create “Durable Models”
  • Model Design Accelerator
  • Dynamic Query Mode
There have also been several minor improvements in the way IBM Cognos 10 handles SAP BW sources. If that’s an option you are using, either contact the Ironside Group and speak to one of our expert IBM Cognos/SAP consultants or check out the Framework Manager manual for details.
There is one notable feature that is deprecated in Cognos 10: native support for CVS and Microsoft Visual SourceSafe has been removed. Support for these and other code repositories is still available, but in a more generic fashion.
Let’s take a look at the major changes.

Durable Models

This is probably my favorite of the new features. The problem this feature addresses is the situation where you want to rename Framework Manager objects, such as query subjects, query items, filters or namespaces, after reports have already been written. If you change the name in the model and republish the package, the report will error out with references to data items which cannot be found. While there have been some workarounds for a while, this feature provides an elegant solution.
The key to making this work is to start with a new model and select an unused language or dialect as the design language. I usually choose “English (New Zealand)” (EN-NZ). Then choose standard English (EN) as the active language. Finally, there is a new project property, Use Design Locale for Reference; you must also set this to true. This causes report object names to be referenced internally in the report using the design language (EN-NZ, or English–New Zealand) rather than the language of the report author (EN, or base English). This is similar to authoring reports in multi-language environments, but requires no special actions on the part of the report author.

In this example, we added a data item called EXPENSE_GROUP_EN to the report. The Framework Manager modeler never applied a user-friendly name to the column. We add it to the report anyway, and save the report. The modeler then uses Framework Manager to change the active language (EN) data item name to a business-friendly name, such as “Expense Account Group”. The design language (EN-NZ) object name must not change. Republish the package. By simply running the report with the updated package, the renamed query item name and column heading are now shown in the report. This is because the internal object reference in the report is set to the design language (EN-NZ, or English–New Zealand) name, which did not change. At run time, the report executes using the language of the user, and so displays the updated object name.
Looking at Report Studio, you will see that the report structure references the original EN-NZ description of the object (EXPENSE_GROUP_EN), which did not change. At run time, it will execute using the active language (EN, or base English) name and column header, and therefore isolate the name change. Essentially, the report is built in one language (English – New Zealand) and run in a different language (English – base language), but this is all done without any special actions by the report author. And regardless of the language in which the report was authored, the object names are always stored internally using the model design language.
There are a couple of caveats to this method. Most obvious is that once you set the Design language name for an object, you must never change it. Therefore, you’ll need to have meaningful object names from the start. I suggest using the native database column names, where possible, as they are meaningful, not likely to change and are not required to be “business friendly” at this point.
Also, this technique works for renaming objects only. It will not compensate for major structural changes, such as moving an object from one namespace to another.

Model Design Accelerator

The Model Design Accelerator is a graphical “wizard” which walks you through the design process for a relational model, complete with physical, logical, and presentation layers. Like most wizards, it doesn’t do much that you can’t do on your own, but does make the steps easier, while enforcing good design principles. You have the complete ability to modify your design manually, as usual.
You launch Model Design Accelerator from the Tools menu. The wizard presents you with a blank star schema model. You drag-and-drop items from your data source onto the fact and various dimension tables as needed. Simply add or delete as many new dimensions as you need. You can rename query subjects and items within the wizard, or later. When you are finished, Model Design Accelerator presents you with any warnings, then proceeds to generate the model, complete with the three layers, in separate namespaces.
Interestingly, it doesn’t automatically create a Dimensionally Modeled Relational (DMR) layer. That step is made easier by directing you to think dimensionally about the model based on the star schema, but you still need to manually apply the various levels to the hierarchies, which can take some thought.
While this tool doesn’t technically add any functionality that didn’t already exist, it’s a nice feature targeted towards the mid-level or beginner model designer. It helps the occasional or new model designer think in dimensional terms and develop a solid three-layer model with minimal effort, and it does that admirably.

Dynamic Query Mode

Dynamic Query Mode (DQM) is an option to speed up query performance for certain cubes by providing in-memory data caching. Dynamic Query Mode is limited to three specific OLAP data sources in the current release, namely TM1 version 9.5.1, Essbase versions 9 and 11, and SAP BW version 7.1. Furthermore, the specific performance improvements you can expect differ according to the data source. In general terms, they support better null suppression and improved, repeatable performance, which is useful for the typical ad hoc analysis style of query for which these cubes are often used.
There is a separate document dedicated just to DQM plus more information in a “Cookbook” on the IBM web site (http://www.ibm.com/software/data/cognos/customercenter/).

How to Standardize Unique Default Skins for Each of Your Different Environments


Many organizations utilize multiple Cognos environments as part of their development process, such as development, QA and production servers. It is not uncommon for users (especially system administrators and developers) to become confused about which server they are working on, as all of the environments look the same (logo, background, font, etc.). So how can you easily distinguish between these various environments?
Here we describe a simple technique to apply a common skin for each different Cognos environment regardless of what a user has set in their profile. This method can be used to set one skin in development, and another in the Test/QA system. In production, you can choose to allow each user to select a skin, or to enforce a standard.
When an individual user logs into Cognos for the first time, their individual profile is set. Because it is set at an individual level, administrative changes to the Cognos default profile will not affect them. For example, if you had 50 experienced users and 10 new users, a change in the default profile will only affect the 10 new users when they log into Cognos for the first time.
Let’s begin:
1. We’ll be working under the assumption of a Cognos 8.3 installation and will provide instructions on how to change the skin on one server. This process can be repeated for every server (environment).
First, select a skin to be used as the default for each environment. Several skin sets are delivered with each Cognos installation; by default, the Corporate skin is used.
  • Business
  • Classic
  • Contemporary
  • Corporate
  • Modern
  • Presentation
Let’s say we decided on this arrangement shown below:
Environment/Server: Default Skin
Production Environment: Corporate
Development Environment: Contemporary
QA Environment: Modern
2. Let’s select an environment we want to change and log into that server. We will use the Development server for this example; based upon the matrix we created in step 1, we want to set the skin to Contemporary.
We can find the skin scheme files under:
Cognos root\webcontent\skins

A typical location on Windows would be:
C:\Program Files\cognos\c8\webcontent\skins
Within the skins folder you will notice a folder for each of the skins listed above. We’ll be replacing the content of these skins with the content from the Contemporary skin.
[Screenshot: skin folders within the skins directory]
3. Select all the folders, copy them, and then paste them in the same directory. We’ll be using these new folders as a backup of the originals. In this particular screenshot you’ll see that two of the folders have been backed up. You may want to change the names of these folders to something like “Original – contemporary” or some other similar naming convention.
[Screenshot: backed-up skin folders alongside the originals]
4. Now let’s ensure that everyone on this server sees only the Contemporary skin. Open the contemporary folder and select all sub-folders and files within it. Copy and paste (overwrite) the contents of this folder into every other skin folder (except the backups you just created). We have effectively changed the contents of every skin to be the Contemporary skin. Regardless of the skin that a user has set in their profile, they will always see the Contemporary skin. Should you want to roll back or make future changes, you can always refer to the backup folders that we created in step 3.
5. To confirm: log into Cognos Connection. You will see the default skin has been changed (green scheme).
[Screenshot: Cognos Connection displaying the Contemporary skin]
6. Repeat these steps for each environment, choosing a different skin as the default and copying the contents to the folders of every other skin.
Note: If you or any user can’t see the change immediately, you may need to clear the cached web pages/content from your web browser first. Here are the instructions for IE 6:
  • Go to Tools -> Internet Options
[Screenshot: Internet Options dialog]
  • In the Temporary Internet Files section, click the Delete Files button. This opens a small Delete Files window. Check “Delete all offline content”.
  • Click OK, then OK again.
  • Launch a new Cognos session. You should now see the new skin.