Wednesday, December 29, 2004

Shifting to the Blogger

So finally I have decided to move to Blogger from JRoller due to a number of issues that I was not comfortable with. I will be shifting all of my blog entries to this place over the next few days.

Saturday, September 18, 2004

Role, Role everywhere and not one is a job description...

It has been a long time since I blogged because I am working on another piece which is too broad and large and is keeping me away from blogging on a few quick topics that I wanted to talk about. This topic is a result of a small discussion that I had with a few people on Roles. The idea of Roles in the theoretical world has been about job description (<self adulation>see here for more information</self adulation>). That is, the role that you are assigned should reflect the job description that you have. For example, if your job description is Trader, then that is the role that all the applications should use to provide the necessary access to the necessary resources. But just like what happens with a lot of other concepts, the basic idea takes a complete backseat and the implementations are a different ball game. Based on the infrastructure applications (especially portal infrastructures) that we have seen in the wild, the number of roles that companies have is anywhere from 200 to 2000 (and counting), depending on the number of applications that they have in production. Now it is perfectly possible that a multi-national corporation has 2000 roles for 50,000 to 100,000 employees, but that typically is not the case in most of the instances that we have seen. The culprit seems to be something else: the idea that a role is an application-specific entity rather than an enterprise-level entity. I am sure the people that have such infrastructure in place already know what I am talking about :) In most of the cases that I have seen, roles are defined at the application level and people are assigned to these roles to provide access. This architecture was fine in the days when each application was on its own, developing its entire authentication and authorization functionality within the product. But with the new single sign-on and provisioning solutions that are being put in place, this should have become a thing of the past.
But that does not seem to be the case; people have continued to use roles as an application-level access entity and taken the easy way out. I completely understand that applications trying to integrate with SSO solutions are under deadline pressure, and that most of them may not want to integrate with SSO in the first place but have to because the top brass is pushing for it. But just as the push for SSO integration is coming from the top, some thought must be given to treating the role as a sacred, enterprise-level entity like a designation, and to implementing the role structure accordingly. Then again, the corporate side is not entirely to be blamed, because they will raise the question of legacy systems and third party applications that bring their own role model. In such scenarios the SSO and provisioning products have to step up and provide a role mapping facility on a per-application basis if they are being used to provide the information to the application, either at management time (creation/update of identity) or at runtime (passing the roles/groups the user belongs to as a header variable to the backend application). The design of the roles themselves in some cases shows the limitation of the product or the thinking being applied. I have seen people design roles which are self-describing, like read_all_accounts and trade_nse. Again, this is probably due to the limitation of the authorization products in the market, which do not provide a very good framework for policy implementation.
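To make the per-application role mapping idea concrete, here is a minimal Python sketch of the kind of translation table an SSO or provisioning layer could maintain. This is purely illustrative: the enterprise role names and application names are made up (only read_all_accounts and trade_nse are borrowed from the examples above), and a real product would drive this from its policy store rather than a hard-coded dict.

```python
# Hypothetical per-application role mapping: the SSO/provisioning layer
# translates an enterprise-level role into whatever roles each application
# expects, so applications can keep their legacy role model.
ROLE_MAP = {
    "trader": {
        "legacy_trading_app": ["trade_nse"],
        "reporting_portal": ["read_all_accounts"],
    },
    "auditor": {
        "reporting_portal": ["read_all_accounts"],
    },
}

def app_roles(enterprise_role, application):
    """Return the application-specific roles mapped to an enterprise role."""
    return ROLE_MAP.get(enterprise_role, {}).get(application, [])

print(app_roles("trader", "legacy_trading_app"))  # ['trade_nse']
print(app_roles("auditor", "legacy_trading_app"))  # []
```

At runtime the mapped roles would be what gets passed to the backend application (e.g. as a header variable), while the enterprise keeps a single sacred role per person.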

Saturday, June 26, 2004

Identity and Access Management - Part III Access Management

In the past few days a lot of discussions and past memories have resurfaced that have helped me bring together my ideas on the Access Management piece of Identity and Access Management. So this is an attempt at putting together all those thoughts and ideas, some that I have heard from other people and some that I understood on my own. See these locations for more details
  1. Tutorial of American National Standard on Role Based Access Control
  2. Types of Access Control
  3. TISSEC (search for access control)

What is Access Control

Access Control is the mechanism by which a resource / object manager restricts the actions / operations that an identified user or Subject (including anonymous users) can perform on a resource or object, based on a predefined policy. Based on this simple definition we can see that the following are the basic components of Access Control
  1. Subject The person, process, or any physical or logical entity or group of entities that can be identified uniquely in an Access Control system / domain.
  2. Object / Resource The resource that the Subject wants to perform some activity on!
  3. Action / Operation The activity (verb) that can be performed. These activities are typically valid only for a particular type of object, i.e. you can "read"(action) a "file"(resource) or a "book", but to "read"(action) a "car"(resource) is meaningless in day to day conversation (somebody can always find a deeper meaning in this type of reading).
  4. Policy The policies are basically a system with a set of rules or axioms which have to be followed while making any decision, in a way similar to Riddles
What is Policy, Constraints and Context?
The Policy is a set of rules of the following form: "what actions the subject(s) can or can not perform on various objects under specific constraints". The constraints take into consideration the context in which the access control decision is taking place. The policy allows the access control system to answer with a yes, no or indeterminate the question "Can X(Subject) perform Y(Action) on Z(Object) in the Context?" The context is additional information about the environment, subject, object and / or action that may be used to make the access decision. Let us take an example to better understand this concept. Assume we have an access control system with the following policy: John(Subject) can play(Action) with the ball(Object) if the color of the ball is not black and it is evening(constraint). Looking at the constraint of the policy we can see how an attribute of the ball (color) and of the environment (time to play) is used to provide the context in which this policy would be valid, and thus provides a constraint on the policy. So while taking the decision on whether to allow John to play with the ball, the access control system has to know what time it is when the decision is being made and what the color of the ball is. Given this policy, we can ask the access control system questions like these:
  • "Can Adam(Subject) play(Action) with the ball(Object) given that the color of the ball is black and it is morning(Context)?"
  • "Can John(Subject) play(Action) with the ball(Object) given that the color of the ball is blue and it is evening(Context)?"
  • "Can John(Subject) play(Action) with the ball(Object) given that the color of the ball is black and it is night(Context)?"
  • "Can John(Subject) play(Action) with the ball(Object) given that the color of the ball is blue(Context)?"
  • "Can John(Subject) keep(Action) the ball(Object) given that the color of the ball is blue and it is evening(Context)?"
  • "Can John(Subject) play(Action) with the bat(Object) given that the color of the bat is blue and it is evening(Context)?"
This example should give an idea as to why the answers can be yes, no or indeterminate depending on the question asked and the policy definition. Sometimes policy design tries to combine constraints with the object, subject or action to achieve a similar policy. For example, we can express the policy above without an explicit constraint, so that it looks like:
  • John(Subject) can play in the evening(Action) with the blue ball(Object)
  • John(Subject) can not play in the evening(Action) with the black ball(Object)
  • John(Subject) can not play in the night(Action) with the red ball(Object)
  • John(Subject) can not play in the night(Action) with the blue ball(Object)
Even though it is not a great example, we can see how the same policy can be expressed in a variety of ways by choosing the granularity of subjects, actions or objects and the expression of constraints. This is a very important idea to keep in mind when designing an extensible policy.
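The John-and-the-ball policy above can be sketched as a toy decision function. This is only an illustration of how a single constrained rule yields yes, no or indeterminate answers; the function name and the string results are of course made up, not any product's API.

```python
# Toy decision function for the policy:
# "John can play with the ball if the color of the ball is not black
#  and it is evening."
# Returns "yes", "no" or "indeterminate" (the policy does not cover the
# question, or the context is missing information).
def decide(subject, action, obj, context):
    if (subject, action, obj) != ("John", "play", "ball"):
        return "indeterminate"      # policy says nothing about this triple
    color = context.get("color")
    time = context.get("time")
    if color is None or time is None:
        return "indeterminate"      # insufficient context
    return "yes" if color != "black" and time == "evening" else "no"

print(decide("John", "play", "ball", {"color": "blue", "time": "evening"}))   # yes
print(decide("John", "play", "ball", {"color": "black", "time": "morning"}))  # no
print(decide("John", "play", "ball", {"color": "blue"}))                      # indeterminate
print(decide("Adam", "play", "ball", {"color": "blue", "time": "evening"}))   # indeterminate
```

Note how the question about Adam, and the question missing the time of day, both fall through to indeterminate; what to do with those answers is exactly what the default-deny/allow constraints discussed later are for.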

Access Control Models

Over time a variety of access control models have evolved, and a basic definition of each model can be found here. Over the last decade the idea of Role Based Access Control has grown and to some extent reached a mythical status. We should concentrate on this model since it is what is mostly used for implementing access control.
Rule-based Role based Access Control
RBAC is basically the brilliant idea of inserting another level of abstraction between user and policy, so that users are assigned to roles and privileges (combinations of actions and objects) are assigned to roles. So instead of saying John(Subject) can play(Action) with the ball(Object) if the color of the ball is not black and it is evening(constraint), the role Child can be introduced so that A Child(Role) can play(Action) with the ball(Object) if the color of the ball is not black and it is evening(constraint), and John(Subject) is a Child(Role). This level of abstraction breaks the security policy into two parts: defining the access control using roles without knowledge of all the users, and defining the user - role relationship as new users are added to the domain, without changing the basic access policy. This helps a lot in evaluating the policy to find security holes and potential conflicts in access control. Besides that, over time researchers have found that the idea of roles allows building some additional rules into the policy which may not be that simple to express in policies without roles. These additional concepts associated with RBAC are as follows
  • Hierarchical Roles[NIST] This basically is the idea of inheritance of roles so that
    • Senior Role acquire privilege of their juniors
    • Junior Role acquires user membership of seniors
    So continuing our example if we define that Child role has 3 senior roles age(0-7), age(8-13), age(14-18) then
    • A Child(Role) can play (Action) with the ball(Object) if color of ball is not black and it is evening(constraint)
    • John belongs to age(0-7) implies
      1. John(Subject) can play (Action) with the ball(Object) if color of ball is not black and it is evening(constraint)
      2. John belongs to Child
    The hierarchy can be as complex as required, and roles can have multiple senior and junior roles, or it can follow a simpler structure like a role tree (where a role is allowed to have only one senior). The only important thing to remember is that there should not be any cyclic assignment, so that a role does not end up being both a senior and a junior of itself due to the role hierarchy. Another comment that I would like to make is that getting the hierarchy of roles correct is typically tough, and thus people designing the role hierarchy should either keep the role structure very flat or use the definition of senior and junior roles to evaluate each role, so that you do not make basic mistakes (like equating the job hierarchy with the role hierarchy).
  • Separation of Duty(SOD) / Mutually exclusive roles / Policy Constraints[NIST & TISSEC] This is a very important idea, especially in the current environment where compliance, conflict of interest and Chinese walls are the buzz words. The basic idea behind SOD is that some of the actions on an object can not all be completed by the same person; for example, the same person can not be accountant and auditor for the same company. The policy constraint is a superset of SOD in the sense that it refers to any other constraint that the policy must follow. For example, a policy may have a constraint that a particular role can not have more than 2 users, and the access control system should be able to ensure that such a rule is considered during role assignment. These policy constraints can be at the following levels
    • User i.e. two persons can not belong to the same role or have the same privilege, or a role can not have more than X persons / subjects (cardinality constraints). For example, two persons can not check in the same file to the version control system (after it has been checked out)
    • Role i.e. the same person can not belong to two separate roles. For example, John can not belong to both age(0-7) and age(8-13)
    • Privilege(Action & Objects) i.e. the same person can not have two different privileges. For example, John can not both create and approve the same request, but he may be able to approve requests created by other users.
    • Constraints i.e. the same person can not have two different privileges due to constraints. For example, if John can not create requests which cost less than 500, then he should be able to approve only the requests that are below 500.
    NOTE: The constraints in research papers typically refer to Policy Constraints, which are basically the set of rules that may be applied at the time of assigning users to roles or when establishing a session (see implementation of SOD). This is different from the constraints that we discussed as being part of the rule, which are invoked during runtime to evaluate the access permission. Even though I have listed the various possibilities above, some of them, like constraint-level policy constraints, are something I have not seen discussed in either products or papers on RBAC or access control for that matter (maybe I am missing something). For that matter, I have not seen a lot of discussion of constraints, as I have described them here, in research papers. The implementation of separation of duty can be done in the following ways
    • Static i.e. policy constraint validation is done at the point of assigning the user to a role (an administrative function)
    • Dynamic i.e. policy constraint validation is done at the point of activating the roles for the user. This function is an advanced version of the Weblogic Role Mapper. So the roles can be selectively "activated" and "deactivated" based on the access requirements of the user or additional policy constraints (like both the auditor and accountant roles can not be activated at the same time, or the trader role can be active only between 9:30 and 4:30). NOTE: Discussions in the literature tie dynamic activation to the user's session, making it similar to the Weblogic RoleMapper (or maybe I am interpreting it wrong). I think this is an underutilization of the concept (for example a trader whose session with the resource manager continues, but whose trader role gets deactivated within that same session)
In the end I would just like to put down that RBAC is not the solution to all problems. Not every policy requirement can be designed using roles. For example, take the case of a trader that is going on vacation and wants to delegate his responsibility to a specific user. It would be easier to write a rule using the user instead of developing role based access control around it. So even though RBAC fits most scenarios, let's keep in mind that it does have some limitations.
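The core RBAC machinery discussed above, privileges on roles, users on roles, and a role hierarchy in which a senior role acquires the privileges of its juniors, can be sketched in a few lines. This is a toy model of the Child / age(0-7) example, not any standard's reference implementation; all the data structures are assumptions for illustration.

```python
# Toy RBAC sketch: privileges attach to roles, users attach to roles, and a
# senior role inherits the privileges of its junior roles. Per the example
# above, Child is the junior of the age(...) roles, so age(0-7) acquires
# Child's privileges and John (in age(0-7)) effectively belongs to Child.
JUNIORS = {"age(0-7)": ["Child"], "age(8-13)": ["Child"], "age(14-18)": ["Child"]}
ROLE_PRIVILEGES = {"Child": {("play", "ball")}}
USER_ROLES = {"John": {"age(0-7)"}}

def effective_roles(user):
    """Roles the user holds, including juniors reached through the hierarchy."""
    roles = set(USER_ROLES.get(user, set()))
    queue = list(roles)
    while queue:
        for junior in JUNIORS.get(queue.pop(), []):
            if junior not in roles:
                roles.add(junior)
                queue.append(junior)
    return roles

def privileges(user):
    """Union of the privileges of all the user's effective roles."""
    privs = set()
    for role in effective_roles(user):
        privs |= ROLE_PRIVILEGES.get(role, set())
    return privs

print(effective_roles("John"))  # {'age(0-7)', 'Child'}
print(privileges("John"))       # {('play', 'ball')}
```

Note that the walk over JUNIORS only terminates because the hierarchy is acyclic, which is exactly the no-cyclic-assignment rule called out earlier; a real system would have to validate that at administration time.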
Group vs. Role
This is a good old debate that keeps coming up, given the similarities between the two. See here for a good explanation.

Policy Design

This is really a very large topic and I have not seen any best practices or even an introduction to this field. So this is mostly my understanding based on some experience I have had. Most of the time the policy design is restricted by the rule design interface provided by the Access Control product. For example, if the interface does not allow you to use the IP address of the client in defining rules, then there is not much you can do about it :( So let us see the various interfaces that products provide for policy design and implementation.
Access Control List
This is one of the most common access control interfaces, available in a wide variety of products. It defines which users have access to what resources in a system. The complexity of the access definition varies a lot. The simpler models let you define which user has access to which resource, while more complex models may let you Allow or Deny a particular action on a specific resource for a user or a group. An important point to note is that ACL based systems typically do not allow you to write rule-based constraints. Due to the prevalence of this model, most people start thinking about Access Control in its terms, which can be a problem if you are designing access control for a rule-based access control engine.
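A minimal ACL sketch makes the limitation obvious: the structure can only enumerate (user, action) pairs per resource. The file names and users are made up for illustration.

```python
# A minimal ACL: each resource lists which (user, action) pairs are allowed.
# Note there is no place here to hang a rule-based constraint like
# "only in the evening" -- that is exactly the ACL limitation noted above.
ACL = {
    "/reports/q1.pdf": {("alice", "read"), ("bob", "read"), ("bob", "write")},
}

def is_allowed(user, action, resource):
    """Pure set lookup: no context, no constraints, just enumeration."""
    return (user, action) in ACL.get(resource, set())

print(is_allowed("alice", "read", "/reports/q1.pdf"))   # True
print(is_allowed("alice", "write", "/reports/q1.pdf"))  # False
```

Anything context-dependent (time of day, client IP) has to be bolted on outside the ACL, which is why starting from this mental model hurts when designing for a rule-based engine.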
Object Access Policy
This is an access control policy without the identity, i.e. the access control applies to all users. So for example, anybody who is accessing a sensitive HR application should be required to sign in using token authentication. Firewall rules to open specific ports can also be classified as Object Access Policy, given that they apply to all the users that are trying to access particular ports of the systems. An implementation of this policy model may allow you to write rule-based constraints, but the complexity of the rules that can be written, and the attributes/environment variables that can be used in these rules, varies a lot.
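The HR-application example above can be sketched as an identity-less rule keyed on the resource alone. The path prefixes and the auth_method context key are invented for the sketch; a real product would have its own rule language.

```python
# An identity-less "object access policy": the rule applies to every user.
# Here, any access under /hr/ must have used token authentication,
# regardless of who the user is.
OBJECT_POLICIES = {
    "/hr/": lambda ctx: ctx.get("auth_method") == "token",
}

def object_policy_allows(resource, context):
    for prefix, rule in OBJECT_POLICIES.items():
        if resource.startswith(prefix):
            return rule(context)
    return True  # no object-level policy applies to this resource

print(object_policy_allows("/hr/payroll", {"auth_method": "password"}))  # False
print(object_policy_allows("/hr/payroll", {"auth_method": "token"}))     # True
```

A firewall rule opening a port to everyone is the same shape: a predicate on the resource and context with no subject in it at all.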
Rule-based Role Based Access Control
This seems to be the best combination of all the features of access control, and at the same time one of the toughest to get right. A lot of SSO products are moving in this direction with different levels of success. At the same time, with increasing support for the concept of identity in the network, network access control products like firewalls will also grow in this direction. The laws passed in recent years have made Privacy support in products very important. Access control is a very important aspect of privacy management (though privacy has other aspects to it, like data anonymity, data encryption and so on), and these privacy management products should also be a very good place to watch how the access control field evolves. Another evolving field in this area is Digital Rights Management, which deals with the idea of how a resource interacts with the resource manager to provide access control information, which is then enforced by the resource manager. This field should be a good place to see how things evolve.
Policy Constraints
Most of the policy design is done by using the policy language provided by the access control system to describe the business rules for accessing the application/resource. But most of the time these access control systems come with a default set of policy constraints (and may allow you to create such constraints) which must be followed by all the access rules. These constraints could be added to optimize the rules engine, reduce the number of indeterminate results (which may require manual intervention for correction) or to provide better security out of the box. I would like to briefly put down some of these constraints that I have seen in the wild.
  • Default Deny/Allow This is the basic policy that defines that if access for a particular user can not be determined, access to that resource is automatically denied/allowed. Based on the security or manageability requirements, the policy may provide the capability to allow or deny by default. So for example, if you are at home and a stranger requests permission to come in, you are most probably going to deny the entry, while if you are a casino owner in Las Vegas you are going to allow anybody to enter your premises unless that person figures in Nevada's black book
  • Policy Override vs. Policy Inheritance The idea being that if somebody has access to your house using a key, it may automatically mean that the person has access to your bedroom (the bedroom being the resource that inherits the access control policy from the house), but typically will not have access to your safe, which uses a different key (the access control policy of the safe overrides the access control policy of the house). This is not exactly a good example because the identity (i.e. the key to the house) is different at each level, but it should give you the basic idea. A slightly better example can be a portal which provides access to a variety of applications, each having its own access policy (but the application trusts the portal to provide the identity) which requires that you be granted specific permission for access. This means that the access policy of a particular application overrides the access policy of the portal. But once you are inside that application, the application may allow you to access any resource (policy inheritance by the application resources). This idea is based on the existence of a hierarchy. Even though this hierarchy is mostly used in the context of resources, it can as well be applied to groups or roles (a role hierarchy is basically a case of policy inheritance).
  • Deny overrides Allow This is a basic security constraint (and it resolves indeterminate cases) that is typically applied by access control systems. It ensures that if the user has both allow and deny permissions based on the policy, the access system will deny access to the user.
  • Insufficient information implies Deny/Allow Sometimes the policy may require additional information to make an access control decision. For example, if a stranger wants to get into your house for some discussion but is unable to produce an appropriate badge, you may deny him access. On the other hand, if you are the owner of a hotel which is not required by law to verify identity, you may allow customers to stay even though they can not show valid photo identification like a driver's license (this is very much possible in parts of the world, including New York, where the public transport system is developed to a level that people do not need a driver's license to get by).
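These system-wide constraints are really decision-combining rules, and can be sketched as one small function. The function and its string values are my own toy formulation, loosely in the spirit of the combining algorithms that rule engines ship with, not any particular product's semantics.

```python
# Sketch of the policy-level constraints above: collect the decisions of all
# matching rules, let Deny override Allow, map insufficient information to a
# configured outcome, and fall back to a default when no rule matched.
def combine(decisions, default="deny", on_indeterminate="deny"):
    if "deny" in decisions:
        return "deny"            # Deny overrides Allow
    if "indeterminate" in decisions:
        return on_indeterminate  # insufficient information implies Deny/Allow
    if "allow" in decisions:
        return "allow"
    return default               # Default Deny (or Default Allow)

print(combine(["allow", "deny"]))       # deny
print(combine(["allow"]))               # allow
print(combine([]))                      # deny  (default deny)
print(combine([], default="allow"))     # allow (the Las Vegas casino)
```

The house-vs-casino examples above are just different settings of the default parameter; the stranger-without-a-badge case is on_indeterminate="deny".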
I will add more to this as I come across other constraints. Besides these policy level constraints, there are sets of constraints that are applicable to specific resource(s) or role(s). For example, a role can have a cardinality limit so that it can not have more than X subjects assigned, or a resource has only a specific set of actions that are valid (going back to the example that reading a car does not make sense). This (Section 4) has some good constraints as examples. This should give a basic idea about what access control is all about!! Let us try to look deeper into what Access Control Systems typically do in terms of implementation of these concepts.


Most of the literature talks about two components of the Access Control Runtime i.e.
  • Access Enforcement Point(AEP) which basically is the component of the resource manager that takes the appropriate actions to allow or deny access to the resource. So for example, in the case of a Web Server, it could be a plugin that is invoked for every request being made, and this plugin takes appropriate actions in case the resource can be accessed (like invoking another component to fetch and return the web page to the requestor) or can not be accessed (returning the access denied page). Typically AEPs can be divided into two types
    • Adapters - These are implementation modules that are specifically built to integrate with third party products which have published interfaces.
    • API - In-house applications can use this interface to utilize the ADP functionality without having to implement it in their own product.
  • Access Decision Point(ADP) the component of the Resource Manager (or outside the purview of the resource manager) which evaluates the policy based on input from the AEP, the policy database and additional runtime sources, and returns whether the Subject should be allowed the particular action on the specific resource, i.e. it answers the question "Can Subject X perform action Y on Resource Z?"
These two components can communicate with each other using proprietary means (like function calls, IPC, binary protocols) or a standard (XACML). I do not feel that XML is the best mode of communication, because most of the time access control systems have to work in high performance environments where XML can be a big drag. At the same time, the ADP can be a component of a slower workflow based system which does not have high performance requirements. Another variation on using XACML could be to use its schema to develop a binary encoding for query and response. Another important point to remember is that the AEP is separate from the ADP, and thus the ADP should authenticate the AEP before it provides the decision or any additional information as part of the answer. This is an important step to avoid the situation where a rogue AEP can access the ADP to understand the policy model, or may even be able to extract information from the ADP which can then be used to attack the access control system or the authentication system for a specific identity (the idea being that if you know the janitor has access to more rooms than the CEO, then you will try to get access to the janitor's identity instead of the CEO's). Anyway, let us try to define a typical use case for the access control system
  1. User tries to access a resource through its resource manager.
  2. The resource manager's AEP verifies with the ADP whether the resource can be provided to an anonymous user (this step does not take place all the time).
  3. If the resource can not be provided, the User is required to identify and authenticate himself through the authentication module
  4. After the authentication is completed, the user is assigned a credential or a token (valid for the duration of the session or a specific time) which the user can then provide to any access control module familiar with the token. The token allows us to design systems where the authentication and access control systems exist separately and you are not required to authenticate every time you want to access a resource. This token typically contains all the information associated with the user that should be required to make the access control decision (this may require the token generator to access a variety of repositories and perform identity and attribute mapping to generate a token that has all the relevant information). This token can be a Kerberos ticket on the network or a badge at a convention which allows you to access the premium seminars that you have paid for. The important thing to remember is that a token generated for a specific domain is valid only in that domain. So you can not use your Blockbuster pass to get access to the Pentagon. The security of the token is a very important thing to consider while designing it, to ensure that the token can not be counterfeited. It may also be important to validate the issuer of the token for trust purposes.
  5. After it has been provided, the user can present this token to the resource manager for validation, and the resource manager's AEP, with the help of the ADP, can decide whether to continue with the requested action / operation on the resource. In order to make this decision, besides the user information (typically part of the token) and policy data, the ADP may require additional information about the resource and environment (the context of the decision). All the required information may not be available to the ADP, and it may need to contact other repositories at runtime to gather it all before making the decision. At the same time, the AEP may also provide information that is more contextual in nature (like the client IP address, HTTP request headers and so on) to the ADP to complete the decision making process. This idea of separating the access policy from the data required for its evaluation is very important and can be exploited to build simpler systems.
  6. The decision made by the ADP is returned to the AEP, which will take appropriate action as determined. Sometimes the ADP may return additional information to the AEP along with the decision, so that the AEP can use it to provide appropriate resources to the requestor. For example, in the case of a Portal Application, instead of asking "does the user have access to Application A?" 100 times for 100 applications, the AEP can ask the question "What applications does the user have access to?" and the ADP can return that information to the AEP, which can then use it to paint the portal for the user.
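The AEP/ADP split in the use case above can be sketched with two small classes: the enforcement point only asks questions and acts on answers, while the decision point owns the policy data. The class names, grant table and response strings are all invented for the sketch; the bulk query mirrors the portal example in step 6.

```python
# Sketch of the AEP/ADP separation: enforcement asks, decision answers.
class DecisionPoint:  # the ADP
    def __init__(self, grants):
        self.grants = grants  # {user: {(action, resource), ...}}

    def decide(self, user, action, resource):
        """Answer: can Subject X perform action Y on Resource Z?"""
        return (action, resource) in self.grants.get(user, set())

    def accessible(self, user, action):
        """Bulk query: all resources the user may perform `action` on."""
        return sorted(r for a, r in self.grants.get(user, set()) if a == action)

class EnforcementPoint:  # the AEP, e.g. a web server plugin
    def __init__(self, adp):
        self.adp = adp

    def handle(self, user, action, resource):
        if self.adp.decide(user, action, resource):
            return f"200 serving {resource}"
        return "403 access denied"

adp = DecisionPoint({"john": {("view", "appA"), ("view", "appB")}})
aep = EnforcementPoint(adp)
print(aep.handle("john", "view", "appA"))  # 200 serving appA
print(aep.handle("john", "view", "appC"))  # 403 access denied
print(adp.accessible("john", "view"))      # ['appA', 'appB']
```

In a real deployment the two classes would sit in different processes with an authenticated channel between them, which is exactly why the ADP must verify who the AEP is before answering.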


The management of the Access Control deals with
  • User Management This deals with the management (creation, update and deletion) of users (and associated information), roles and users' assignments to roles. The user information can be managed by the access control system itself, or it can be managed in a separate repository and imported into the access control system via a push or pull model. The roles are typically managed by the access control system, which typically allows managing the roles, their hierarchy and any additional constraints that may be applicable (like separation of duty). The user's assignment to roles can typically either be managed by the access control system or be provided at runtime (as part of the token, for example). Static assignments are typically handled by the access control system, and it may also provide the ability to write assignment rules which are evaluated at runtime to get the list of roles that a user belongs to (this is not very common so far).
  • Resource and action Management This deals with managing the resources and their associated actions. Even though there are a variety of ways of modelling resource hierarchies, a tree based representation is the most common method, with each resource having one and only one resource as parent and 0 or more child resources. Actions are tied to resources, and the product may provide the capability to define the action set that is possible for a type of resource, with each resource belonging to a resource type.
  • Policy Design, storage and replication This is the core of the access control system. It provides the facility to create policies, i.e. develop rules which allow or deny users or roles the ability to perform certain actions on the resources under the given constraints. This management function also has to deal with how to store the policies thus developed and the procedure used for making them available to the ADP via a push, pull or other model.
  • Policy Provisioning In a world of centralized policy management with heterogeneous ADPs, the concept of provisioning the policy to these ADPs becomes very important. The idea is that policies are developed using a single policy model, and then these policies are translated to the policy language understood by the particular access decision point for enforcement. This can be very tough to achieve, especially if the features offered by the policy design system are more advanced than those of a legacy resource manager which may provide only very basic support for access control modelling. Such constraints may require remodelling the policy, or may otherwise result in a partial implementation of access control.
  • Data integration Even though this has been referred to under user and resource management, it is a very important piece of access control. This defines how the policy engine (ADP) gets the information that it needs during runtime to make the access decision. The data can come from
    1. Requestor - in the form of a token or any additional information
    2. Policy Database - most policy databases have provision to store and manage information about users, roles, resources, etc. This information can be used at runtime for evaluation.
    3. External/trusted repository - the policy engine may receive data from an external repository via a push or pull model during evaluation of the policy (at runtime) or in offline mode.
    Managing and ensuring the accuracy of the sources of data is very important, and data conflicts may have to be resolved (for example, the requestor and the policy database may both have the same data with different values, and it may be necessary to decide which data source takes precedence).
  • Audit: This forms a very important part of access control. Even though most products generate some kind of audit trail, it is mostly left in the archives of the company vault until it can be destroyed according to policy. But application audit monitoring in conjunction with IDS, IPS and honeypots (intrusion technology) can be a very strong data mining tool. For example, a consistent attack on an application can mean that an intruder has succeeded in compromising an internal computer, and correlating the two can help fight such attacks, especially those carried over SSL-enabled protocols. Similarly, a change in the pattern of application audit events should work as an alarm. But so far I have not seen, at most of my clients, any drive to bring these two technologies together and use data mining to generate patterns which could then be used by an IPS for better management.
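The allow/deny rule evaluation described under policy design above can be sketched in a few lines; the rule model, the deny-overrides strategy and all the names here are my own illustrative assumptions, not any particular product's API:

```python
from dataclasses import dataclass, field

@dataclass
class Rule:
    """A single policy rule: allow or deny subjects (users or roles)
    an action on a resource."""
    effect: str    # "allow" or "deny"
    subjects: set  # user ids or role names the rule applies to
    actions: set
    resources: set

@dataclass
class PolicyEngine:
    rules: list = field(default_factory=list)

    def decide(self, user, roles, action, resource):
        """Deny-overrides evaluation: any matching deny wins,
        otherwise any matching allow; default is deny."""
        subjects = {user} | set(roles)
        matched_allow = False
        for rule in self.rules:
            if (subjects & rule.subjects) and action in rule.actions \
                    and resource in rule.resources:
                if rule.effect == "deny":
                    return "deny"
                matched_allow = True
        return "allow" if matched_allow else "deny"

engine = PolicyEngine([
    Rule("allow", {"trader"}, {"submit"}, {"order-book"}),
    Rule("deny", {"contractor"}, {"submit"}, {"order-book"}),
])
```

Translating such rules into whatever language a legacy ADP understands is exactly the provisioning problem described above: a deny-overrides model like this may simply not be expressible there.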
This concludes a brief discussion on the access control.

Saturday, June 05, 2004

Identity and Access Management Infrastructure

I have been thinking for some time about the possibility of developing an Identity and Access Management architecture using existing open source products. I had some ideas with regard to components I could use, for example OpenLDAP and MySQL as the directory and database respectively, Apache as the web server, and so on. But in order to work out an end-to-end architecture, I thought of starting with a documented architecture that tries to accommodate as many IAM concepts as possible.
The image below is an attempt at the same, and I already know that I have not covered all the concepts that I could think of. But at the same time, this should be a good exercise in understanding where open source stands with regard to building a complete solution.

Sunday, April 04, 2004

Federated Identity Management Product or what you should remember when buying a product

It has been a long time since I last wrote something, but FIM is something that I see people doing even without realising that they are doing it. I will try to list some of the use cases (which can be mapped to the concept of profiles in the SAML or Liberty world) that are part of the general specifications, and some that are not. This article does not provide an introduction, but you can read here to better understand what I am talking about. Just like in previous articles, I would like to break the use cases down into two parts
  • Runtime: These use cases typically occur every time the user hops between sites that are part of a federation (that has such a Star Trek era feel to it). They deal with how information is passed from one site to another when the user is doing this site hopping, resulting in session establishment. Besides that, they also include auditing all these events for monitoring and reporting purposes.
  • Management: These use cases cover the management aspects of FIM and typically occur out-of-band, the first time the user accesses a new site, or the last time the user accesses one of the sites. They deal with the token transfer protocol (including trust establishment between sites), the identity mapping between two sites, and other configuration such as authentication level mapping. These management events must be audited to ensure that all changes can be tracked and used for policy enforcement validation.


FIM can be applied to different categories of applications, the most important of which are as follows
  • Web Based cross domain Single Sign On
  • Web Service Authentication
  • Client-Server or distributed applications with proprietary protocols.
The rest of the article will discuss FIM as it applies to web-based applications with cross domain single sign-on requirements. The basic use case can be defined as follows
  1. The user accesses and authenticates at Site 1 using a specific authentication method, which sets up a context for the user (hereafter referred to as a session).
  2. The user then clicks on a specially formatted URL, which generates a token and redirects the user to Site 2 (passing the token via GET or POST).
  3. At Site 2, if the token was not received via the redirect, Site 2 retrieves it from Site 1. The token is then validated to confirm that it came from Site 1.
  4. After the token has been validated as coming from Site 1, the user information provided by Site 1 must be mapped to Site 2, and a context (session) is set up with the information passed from Site 1.
  5. After the user has completed their work, they log out from Site 1, at which time Site 2 is also required to log the user out.
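The token steps above (generate at Site 1, validate at Site 2) can be sketched with a shared-secret trust setup. The token format, claim names and lifetime here are hypothetical simplifications for illustration, not the SAML or Liberty wire format:

```python
import base64
import hashlib
import hmac
import json
import time

# Hypothetical shared secret, exchanged out-of-band when trust was set up.
SHARED_SECRET = b"site1-site2-shared-secret"

def issue_token(user_id, secret=SHARED_SECRET, ttl=300):
    """Site 1: assert the user's identity in a signed, expiring token."""
    claims = {"sub": user_id, "iss": "site1", "exp": int(time.time()) + ttl}
    body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(secret, body.encode(), hashlib.sha256).hexdigest()
    return body + "." + sig

def validate_token(token, secret=SHARED_SECRET):
    """Site 2: verify the signature (proving the token came from Site 1)
    and the expiry; return the claims, or None if invalid."""
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(secret, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None
    claims = json.loads(base64.urlsafe_b64decode(body))
    if claims["exp"] < time.time():
        return None
    return claims
```

The mapping of the validated `sub` claim to a Site 2 identity (step 4) is a separate concern, discussed under identity mapping below.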


The basic ideas on the management side are identity federation and trust setup. Trust setup would typically be performed out-of-band. Identity federation, i.e. mapping the identities between two sites, can happen out-of-band or during the first visit to the federated site. At the same time, identity federation should also address the requirement to break the identity mapping. These management tasks may be performed as self-service or by the administrators of the two sites.


Most FIM systems would consist of the following components, as introduced here.
  • TRUST: The basis of all FIM is trust. Besides the legal aspects, the technological aspect of trust can be established in a variety of ways. Some of the simple ways to set up trust are
    1. Shared Secret between sites
    2. Public-Private Key pair and/or Mutually authenticated SSL between sites
    3. User's IP address range(if site is being accessed from intranet for example) for access
    4. The contacting site's IP address range
    5. ID/Password pair
    Beyond that, there are ideas floating around regarding third-party trust providers, trust brokering services and so on. I do not expect to see them as market requirements in the near future.
  • Identity and attribute Mapping: The identity from Site A must be mapped to an identity from Site B at runtime. This mapping can be
    1. One to one between the two sites
    2. One to many, in the case where the same user from one site may be both an administrator (for the company) and a plain user of the other site.
    3. Many to one, in the case where a set of employees is given access to a paid site through a standard account id.
    Besides the identity itself, the token can carry additional information such as the user's attributes (address, etc.) and groups/roles (at Site A or Site B). This information would be used by the destination site to build the context for the user. At the same time, the destination site can have its own set of information about the mapped identity, which may be added to the user's context. With this concept of mapping and aggregating user information, the following must be considered
    • Whether the Site 1 information overrides the Site 2 information, or the other way around.
    • In case multiple identities match, how the identity is selected (maybe using a simple rule such as which domain name or client IP address was used to access the web site; alternately, the user may be offered a choice of identities).
  • Session Management: Although session management has not been seen as important from the point of view of the standards, I think it is something that will matter in the wild. Some of the issues that would have to be addressed are as follows
    • Session Timeout: How would the session timeout be set for the federated site? Would it be based on the timeout of the main site, or decided by the destination site?
    • Session Logout: Even though the concept of universal timeout has been made part of SAML 2.0, it does not address how the sites would manage site specific logouts. This may be important from the point of view of quality of service and service metering.
  • Authentication Module: Even though authentication is not part of FIM per se, the authentication mode is used by most sites to set up access control for the user. For example, the user may have been authenticated using a basic id/password at Site 1 and may then need to access some information with a higher authentication requirement. In such step-up authentication scenarios, the federated site may ask the user to perform step-up authentication at Site 2, redirect the user to Site 1 for step-up authentication, or simply display access denied. In a large federation, it would be important to decide whether the point of entry for authentication is a specific site, or whether all sites allow authentication using the same information (i.e. id/password combination) and then let the user SSO to any other site in the federation. These issues also come into play in the case of bookmarks (Passive Requestor Profile) for federated sites. When the user opens such a bookmark, the site in question may have to redirect to the original site for authentication (if the user is accessing a particular domain name or URL pattern), or it may let the user log in to the site itself.
These requirements are some of the ideas that I think products should support. In the next 2-3 years these use cases may appear in the wild, and we should be ready to solve them.
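The one-to-one, one-to-many and many-to-one identity mappings discussed above can be sketched as a simple lookup table on the destination site; the table contents, site names and selection rule are illustrative assumptions, not any product's schema:

```python
# Hypothetical mapping table at Site 2: (source site, remote id) -> local ids.
FEDERATION_MAP = {
    ("siteA", "alice"): ["alice-user"],             # one to one
    ("siteA", "bob"):   ["bob-admin", "bob-user"],  # one to many: rule or user choice
    ("siteA", "carol"): ["acme-shared"],            # many to one: employees share
    ("siteA", "dave"):  ["acme-shared"],            #   a standard paid account
}

def map_identity(source_site, remote_id, chooser=lambda ids: ids[0]):
    """Resolve a federated identity to a local one. `chooser` stands in for
    the selection rule (domain name, client IP, or asking the user)."""
    local_ids = FEDERATION_MAP.get((source_site, remote_id))
    if not local_ids:
        # No federation established yet: trigger out-of-band enrollment.
        return None
    return chooser(local_ids)
```

Whether this table lives with the trusted or the trusting party, and who is allowed to create or break an entry, is exactly the management question raised above.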

Sunday, February 29, 2004

FIM(Federated Identity Management) based Security Services

After writing a previous post discussing how far away FIM really is, I read a good article on Digital ID World about FIM which really forced me to think about how this game may play out over time.

What is FIM?

From my point of view it is a real-world use case of the basic idea that the user should not be bothered to log in to each and every resource they want to access (SSO). So once the user has authenticated with one resource manager or standalone authentication product, all the other resource managers (let's call them trusting parties) that TRUST that particular resource manager or standalone product (let's call it the trusted party) will accept the identity provided by the trusted party. We have three participants here: the user, the trusted party and the trusting party. Does that not remind you of PKI? Well, maybe not, but it does to me, so let me pick up that thread of thought.

PKI vs FIM, or why FIM may succeed where PKI failed? Let's try to dissect the PKI failure. Some of the possible reasons may have been

  • immature technology vendors and their products(this may have been more of a chicken and egg situation)
  • high distribution and maintenance costs for retail customers
  • pre-911/Slammer easy-going attitude on security
  • secure delivery and storage of the private key (whether in a browser or a smart card)
  • privacy issues on the customer side
  • Global registry of trusted CAs, complex revocation procedure
  • the CA's inability to take on liability for the identity of the sender, especially in international systems with open PKI.
  • business requirements and technology being inseparable, i.e. the public and private keys have to be used for the SSO infrastructure to work.
Even though the list above may not be complete, and some of these issues were addressed to a degree as PKI matured on the business side, it has given FIM a better chance to survive, since people do not have to relearn the lessons of PKI (or maybe we will not learn!!). Now let us see some of the ways in which FIM is different from PKI
  • Duration of trust: An important issue with PKI is that the duration for which the trusting party is ready to accept the user is defined by the duration for which the certificate is valid (unless a CRL infrastructure is in place, which provides only a go/no-go feature). In the case of FIM, the duration of trust can be configured and limited during the initial sign-on, which should be helpful in developing policies for integrated re-authentication, quality of service requirements and maybe other great uses.
  • Degree of trust: PKI was so dependent on the key architecture that it was impossible for other authentication strategies to survive, which pissed off a lot of people who did not want to establish a PKI for a simple website. FIM does not set any such requirements on authentication: it demarcates authentication and trust establishment as two separate domains controlled by their own rules. This implies that the trusted party and the user can decide what type of authentication they would like to have, and at the same time it allows the trusted party and the trusting party to come to an agreement on the mapping from authentication mechanism to level of trust. For example, password based authentication may map to the lowest level of trust and SecurID to the highest in the case of an email website, while SecurID may map to the lowest level of trust and a fingerprint scan to the highest for a corporate financial transaction.
  • Privacy: An important issue with certificates is that they bind a person to the certificate, and people have to develop policies around them to address the intent or use of the certificate. Some of this was solved by adding more information to certificates, such as a usage policy, but this led to an all-or-nothing situation, i.e. the user had to provide all the information or none of it to sites. FIM addresses these basic issues by providing ways to tie together multiple identities, the user's role information and additional user-specific information on a per-trusted-party basis, instead of the all-or-nothing case of PKI. The support for roles allows implementing delegation, which was not possible in PKI without multiplying the number of certificates the client had to manage.
  • Competitive Market: The biggest hindrance to PKI was that vendors were banking on becoming the global directory for certificates, which led them to push for open PKI. At the same time, the browsers' PKI component implementations meant that seamless integration was available only to the select few whose CA certificate could make it into the browser. The similar idea of one or two very large trusted parties in the field of FIM has also not taken off in a big way. This is where WS-Federation and Liberty Alliance have an advantage. These specifications allow the development of closed FIM communities (their PKI equivalents have been more successful) where the trusted party becomes the pivot which brings together users and trusting parties. This to some extent opens the market to competition and allows trusted parties to compete for trusting parties and users, which may be beneficial to the market as a whole. It would be interesting to see whether existing branded portals and e-commerce sites (or similar large repositories of user identities) jump onto this bandwagon to generate additional revenue, in a role similar to that of banks in the credit card business.
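The authentication-mechanism-to-trust-level mapping described under degree of trust can be sketched as follows; the numeric levels and per-resource requirements are made-up examples of what a trusted and a trusting party might agree on:

```python
# Hypothetical agreement between the parties: each authentication
# mechanism carries a numeric trust level...
AUTH_TRUST_LEVELS = {"password": 1, "securid": 2, "fingerprint": 3}

# ...and each resource states the minimum level it requires.
RESOURCE_REQUIREMENTS = {"email": 1, "financial-transaction": 3}

def access_decision(auth_method, resource):
    """Return 'ok' if the session's authentication level meets the
    resource's requirement, else 'step-up' to signal that the user
    must re-authenticate with a stronger mechanism."""
    have = AUTH_TRUST_LEVELS.get(auth_method, 0)   # unknown method -> no trust
    need = RESOURCE_REQUIREMENTS.get(resource, 1)  # default: lowest level
    return "ok" if have >= need else "step-up"
```

The same table could be inverted per deployment, as in the example above where SecurID is the ceiling for an email site but the floor for corporate financial transactions.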
Components of FIM: Basically most of these components are addressed by the specifications, but at the same time they are not completely defined by them.
  • Trust/Liability/Contracts on paper and their enforcement in implementation: Trust forms the major part of any FIM. It can be achieved technologically, and the liability arising from its violation is limited/transferred by contracts and insurance. So it is important to decide on the security used for transport of information (asymmetric key based, shared-secret based, hybrid, or leased/secure lines), on the systems that are producers and consumers of the information (the security policies for these components should be agreed upon and, if required, mapped to the companies' security policies), and on the systems that store the information, while at the same time setting policies and checks to control the damage.
  • Authentication Modules: This component is present on the trusted party's system. The user connects to the trusted party (either by redirection when the user tries to access a resource, or directly) and uses one of the authentication processes (such as form-based, basic authentication, SPNEGO, SecurID, a fingerprint scanner) to send the authentication information to the trusted party. If the authentication is successful, the trusted party starts a tracking system/session for the user and generates a token for the duration of the session with application specific user information (it almost looks like we are talking Kerberos now). The user can then be redirected back to the resource with the token in a variety of ways (see SAML for more information about supported protocols). The information provided by the token helps the resource manager set up a session for the user with associated information about their roles and relevant attributes such as preferences. Additional management information (like session lifetime, authentication level, account status, session tracking ID) may be passed by the trusted to the trusting system (NEED TO FIND how that fits into Liberty/SAML). The rest of the policy information, such as what the user or their role has access to, would typically be managed by the trusting party, but in some cases this information may be passed by the trusted party to the trusting party during the initial sign-on.
  • Identity Management: The trusted party is expected to have an identity management system in place, and it will have to be integrated with the identity system on the trusting party's side. This is required to manage creation of the user id and management of its attributes, password (reset), and trusting party service (self-)registration. The identity mapping information would be pushed to the trusting party at runtime or out-of-band, and may have an associated workflow that requires input and validation from all the parties (SPML is a good candidate for such requirements). Another important part is the flow of identity information from the trusted party to the trusting party, and from users to the trusting party. A lot of the time the trusting party may have an additional or existing source of identities which may need to be made available to the trusted party so that users can tie together all their identities, or there might be financial account information which the user may not want to leave with the trusted party (well, it is not trusted that much ;) ). Besides that, it is important to form policies on dealing with identity name clashes, multiple identities on the trusting site for one of the trusted party's user identities, and vice versa.
  • Session Management Interface: Though not part of most specifications, I think this is an important component. Defining what the session is in the context of the trusted party and the trusting party, how information related to a user or session can be propagated to all the concerned parties, and how the trusted or trusting party will react to such notifications at runtime, would help everybody who is part of a FIM community design their systems to take care of the various use cases.
  • Liberty Alliance/WS-Federation specification implementation: Well, this may be the simplest part, available out of the box from various vendors.
  • Legacy system integration: This basically consists of those applications that cannot be updated, for various reasons, to integrate with the FIM infrastructure. It will be interesting to see what trusted parties make available for managing such requirements.
This kind of completes the basics on FIM as a security service. I am not sure whether a complete picture has been captured....

Sunday, February 22, 2004

SSO and Web Hosting companies/Telco

Over the last few months, something I have been thinking about is why hosting companies have not started providing sign-on services. It is a chance for the hosting companies to provide this important service, and at the same time to allow the chosen vendor to prove how well its product works. But after some deliberation, this is what came out
  1. Where is the Apache/Tomcat of SSO? If you look at most of the companies that provide very low cost hosting services (and hence have very high volume), they are able to keep prices low by using free software, and so until an open-source stable system is available, these guys are not going to bother with this. But at the same time, an SSO vendor could form some kind of strategic partnership with a big hosting company and use their solution as a reference implementation. This is something similar to what IBM has done by providing DB2 to (am not sure about this?) and you find it in a lot of places.
  2. How confident are we?: In order for that to happen, the vendor itself has to be confident about its product. Even almost 2-3 years after some of these products came to market, some of them still have limits when it comes to deployment capacity and stability. To be fair, implementing SSO is a complex challenge in itself, and so far complete suites targeting hosting companies (i.e. a combination of products that will help migrate existing hosted applications to an SSO platform) are not available.
  3. Is it worth it?: So how would a company go about hosting such a service? I guess most companies have Apache servers serving multiple domains. The SSO product would be installed on these and configured to protect a specific set of domains. Then there would be a directory/database that would have to be managed, with that many identities and passwords. So far identity has been distributed across the different applications being hosted; now it needs to be consolidated and brought into a single place. Will the existing products be able to handle the onslaught? Maybe... maybe not... having 15-20K identities is one thing, and 1 million is a totally different beast. I have seen products that can take onslaughts of that order, but what is that going to do to the user experience (and what kind of hardware upgrades would be required)? Or a different architecture of distributed, indexed databases may need to be developed so that smaller servers and databases can be used to attack the beast (maybe vendors need to learn something from Google on that). Last but not least, a simple integration process should be available before the hosted applications can be migrated. In order to allow existing applications to continue working, the products should send the id and password specific to the back-end application for authentication. This automatically brings in the whole issue of identity and password mapping and synchronization: where will new identities be created, where will password changes and resets be managed, which password reset system will be used (the application's user registration console or the SSO registration console), and how will that be propagated to the applications' databases (this is a job for superman, aka a "stable metadirectory with a very simple user interface for configuration"!!)? As for the approaches available for back-end application integration, most of the time it can be simple header variable based integration.
Given that the products need to grow in order to make such deployments simpler, it may not be the right time for implementation. The lessons learned from the credit card processing services available for hosted applications should form a very good model for deciding how hosted applications will get comfortable with the entire process.
  4. Where do we stop?: Now what about group information and user attributes? Basically, should SSO manage some or all of that information? I think user identity may be the first step, but ultimately all the user information may have to be migrated to the SSO system, with generation of an application-specific credential that is sent to the back end. We have SAML, in combination with two-method (login and logout) integrated authentication modules, to thank for that (where are they?). Besides that, since the information to be sent to the back end has to be specific to the application, the product should have a good way of managing this information on a per-application basis (I have not seen very good attempts on that side).
  5. What about FIM (Federated Identity Management)?: I think that is a long way into the future. Let the corporations jump onto the bandwagon and solve the trust and liability issues before hosting companies jump on this. Maybe the market will evolve like the certificate market, where a set of third parties become trusted agents that issue proxies (as hosted applications, certificates, web services or who knows what) that are trusted by most of the parties, and these third parties consolidate or come to trust each other over the long run. Or the market may never grow beyond one-to-one or consortium based trust. But if third-party companies can really take it to the next level, it will be good for everybody.
So, these are my thoughts on the subject. Let us see how things really work out.

Saturday, February 14, 2004

Identity and Access Management - Part II - Identity Management

Before we go too far on the path to understand what its management is about, let us define what identity is.

What is Identity? (I am not Dave, that is just my Name)

In case you read the link that I provided in Part I, you have the basic idea of how identity has been defined so far as an abstract concept. In order to map this to a more real-world scenario, I have interpreted the three tier system in the digital world as follows
  • Core Identity: This is the digital representation of an entity in the domain. This needs to be unique in the particular domain and can be a UUID, email-id, employee id, or something that uniquely identifies the user in the domain.
  • "Action" Identity: This defines the identities that the core identity uses to perform its work. So for example the core user can use unix root id or a NYSE trader role. These identities are representation of the core identity in specific resource(s). Typically these identities are used by the resource manager to identify the user. These identities are typically mapped to core identity(or vise versa) during provisioning.
  • "About" Identity : Every identity has some information associated with it (like name, address, and so on). This information helps the resource managers in the domain to understand the core identity better and provide the resource based on the policies defined. I like to categorize this information into the following sets.

What is Identity Management?

The basic idea behind identity management is to manage the three tiers of identities that represent the entity in the domain. The core identity is the basic representation of the entity in the domain. The resource managers (which provide functionality and data to the entity) may, for various reasons, not recognize the entity by its core identity but as a completely different user id (for example the root id on unix) or by the role the core identity has been assigned (for example a unix admin role or a NYSE trader role). These identities (the "Action" identities) help the resource manager recognize the user in its own context instead of bothering with the core identity. This simplifies the job of the resource manager in the sense that it does not need to know the core identity of each and every entity it serves. Once the resource manager has recognized the core identity in its own realm, it may need additional information about the identity to make decisions based on the resource manager's policies. These decisions can be about whether it should give access to a required resource, or what resources it should serve to the entity, and so on... I like to classify this "additional information" into three types.
  1. Authentication Information - This information is needed by the authentication system of the resource itself, or by one trusted by the resource, to make sure that the entity is who it claims to be. This information can be
    • What entity knows (like password)
    • What entity has (like token generator, smart card, certificate)
    • What entity is (like finger print)
    The entity in this case can be a physical entity like a user or a logical entity like an application.
  2. Domain Information - The domain specifies that each and every entity's representation has a basic set of information associated with it. This may be information like name, address and so on. The decision about what to include is typically made at the domain level.
  3. Resource information - This information is relevant only for a particular resource, and either does not make sense for another resource manager or will be used in a different context by another resource. For example, a trade limit may make sense for a securities trading application, but would not be relevant in a tax application, and at the same time may have a different connotation in a forex application. This is typically defined by the resource group itself.
In the next section I will try to define which components typically come into play at runtime, i.e. when the entity is interacting with the resource manager to get access to a resource, and during management.
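The three-tier identity model and the three kinds of additional information above might be represented as a record like this; the field names and sample values are my own illustration of the concept, not a schema from any standard:

```python
from dataclasses import dataclass, field

@dataclass
class Identity:
    """Sketch of the three-tier identity model: a core id unique in the
    domain, the action identities it maps to per resource, and the three
    categories of additional ("about") information."""
    core_id: str                                        # unique in the domain
    action_ids: dict = field(default_factory=dict)      # resource -> id/role used there
    authn_info: dict = field(default_factory=dict)      # what the entity knows/has/is
    domain_info: dict = field(default_factory=dict)     # name, address, ... (domain level)
    resource_info: dict = field(default_factory=dict)   # resource -> resource-specific data

dave = Identity(
    core_id="emp-1042",
    action_ids={"unix-host": "root", "nyse-app": "trader"},
    authn_info={"password_hash": "..."},
    domain_info={"name": "Dave"},
    resource_info={"nyse-app": {"trade_limit": 100000}},
)
```

Provisioning, in these terms, is the act of creating and maintaining the `action_ids` mapping, while the resource manager only ever looks at its own slice of `resource_info`.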


The identity management system should be able to help the resource manager identify and authenticate the entity at runtime.
The authentication mechanism can be broken down into two components -
  1. Process: This is the moving part of the system which typically performs the following functions
    • Retrieves the authentication/identification information using the configured procedure. This can be achieved in a wide variety of ways, such as basic authentication, forms based login, fat clients, CSI-IIOP, certificates, fingerprint scanners, iris scanners, and challenge-response mechanisms like SPNEGO and SecurID. An important part of the retrieval process is to ensure that the confidentiality and integrity of the authentication information are not compromised in the process.
    • Once the information has been retrieved, it may need to be processed using a specific algorithm (like a hashing algorithm for one-way passwords, or CRL validation for certificates) before it is in a form that can be compared to the information in the database corresponding to the entity.
    • Besides that, it validates whether the security policies regarding inactive account expiration, authentication information expiry (in case it is not biometric), number of logon attempts, and the time and location of access by the entity are being followed.
    • Even though this is not part of core authentication, the process generates authentication audit events as configured.
  2. Database/Directory: The trusted source of the authentication information with keyword being "trusted".
    • The database should be designed so that the integrity and confidentiality of the authentication information can be maintained.
    • The process uses the database to validate whether the information provided by the entity matches the information present in the database. Most of the time this distinction between the database and the process is not made. But it is important to realize that as we move toward SSO, a very important strategy may comprise having a single database which is shared by different processes that are themselves embedded in legacy applications.
    • These databases can be LDAP based directory, Active Directory, RDBMS, file system, ACE Server database, certificate store, to name a few.
    • Once the distinction between database and process is well understood, the next thing to keep in mind is how the process interacts with the database, i.e. which databases the "process" supports, and whether the "process" provides a facility to map an existing data structure/schema to its own data structure, or whether the database schema needs to be specific to the "process" being used.
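The split between the authentication process and the trusted database, together with policy checks like a lockout on repeated failures, can be sketched as follows; the in-memory dict stands in for a real directory or RDBMS, and the parameters are illustrative:

```python
import hashlib
import hmac
import os

# The "database" side: stores only salted one-way hashes plus policy state.
USER_DB = {}

def enroll(user, password):
    salt = os.urandom(16)
    USER_DB[user] = {
        "salt": salt,
        "hash": hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000),
        "failures": 0,
        "locked": False,
    }

def authenticate(user, password, max_failures=3):
    """The "process" side: hash the supplied password and compare against the
    database, enforcing a simple lockout policy on repeated failures."""
    rec = USER_DB.get(user)
    if rec is None or rec["locked"]:
        return False
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), rec["salt"], 100_000)
    if hmac.compare_digest(candidate, rec["hash"]):
        rec["failures"] = 0
        return True
    rec["failures"] += 1
    rec["locked"] = rec["failures"] >= max_failures
    return False
```

Note that `USER_DB` could just as well be LDAP, Active Directory or an RDBMS: nothing in `authenticate` depends on where it lives, which is the point of separating process from database.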
Something that I have found missing in most discussions is the concept of the session. This is an important part of authentication and authorization, but at the same time it is not addressed in most of the specifications. It may be hard to define the concept of a session, but in most cases the session can be defined as the duration during which the entity is interacting "actively" with the resource manager. The definition of "actively" is very subjective and may vary from a few minutes for a user application to days for long batch processing transactions. But for most applications, the concepts of session inactivity timeout and session failover should always be kept in mind while considering authentication, which typically gets tied to session management.
Based on the components described above, the authentication runtime implementations can be broken down into the following categories.
  • Standalone component, like Web SSO products or desktop authentication, where the authentication of the identity can happen even without the entity connecting to the resource manager. In such scenarios the resource manager trusts the authentication mechanism and uses the identity (and additional information) passed to it to construct the entity's identity. In the case of a standalone component it is important to understand the mechanism and security behind the transfer of the user's identity from the authentication process to the resource manager. This transfer can happen via header variables (in the case of Web applications), SAML, Privilege Attribute Certificates (PACs), and so on. Besides that, the resource may use a different identity to identify the entity, in which case the identity mapping would have to be performed by either the authentication mechanism or the resource manager.
  • Integrated component, like a built-in security module, where the authentication happens when the user tries to access the resource by contacting the resource manager. This was the most prevalent implementation before the Single Sign-On concept came into the picture. The resource manager in this case has a built-in module that provides the identification and authentication facility. In addition, these components provide management facilities (like identity creation and password reset). It is important that these applications be part of the single-point identity management strategy. Most provisioning products support the concept of adapters/connectors which allow you to integrate the identity solution into these integrated components using component-specific APIs or standard protocols. In case the component's database is built on a standard RDBMS or directory, meta-directory products are available which can synchronize the information between the products. A very important point to remember is that the identity information can flow both ways, i.e. from the central repository to the resource manager's identity database and vice versa.
  • Shared database This category falls between the two approaches described above. A lot of new in-house applications typically take this approach. It allows the authentication process to be integrated with the resource manager, but uses a database that is not under the resource's complete control. A good example would be a Unix box using LDAP/Kerberos for authentication. The process uses the available database (which may be managed via another process) to validate the authentication information.
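The standalone case above can be sketched as follows - the resource manager's side of a header-variable SSO handoff, including the identity-mapping step. The header name, proxy address, and mapping table are all hypothetical; the essential points are that the header is trusted only when the request provably came through the SSO layer, and that the asserted identity may need mapping to the application's own identifier:

```python
# Resource manager side of a header-based Web SSO handoff (illustrative only).
TRUSTED_PROXY = "10.0.0.5"            # only the SSO agent may reach us from here
IDENTITY_MAP = {"jdoe": "john.doe"}   # SSO identity -> application identity

def resolve_identity(request_headers, remote_addr):
    """Trust the SSO-injected header only from the SSO proxy, then map the
    asserted identity to the resource manager's own identifier."""
    if remote_addr != TRUSTED_PROXY:
        # Anyone could forge the header, so reject direct connections outright.
        raise PermissionError("request did not come through the SSO layer")
    sso_user = request_headers.get("X-Remote-User")
    if sso_user is None:
        raise PermissionError("no identity asserted by the SSO layer")
    # Identity mapping: the application may know the user by another name.
    return IDENTITY_MAP.get(sso_user, sso_user)

print(resolve_identity({"X-Remote-User": "jdoe"}, "10.0.0.5"))  # john.doe
```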


Identity management deals with the process of addition/modification/deletion of identities and their associated information. There is nothing new about this concept; for years resource managers have provided built-in components that do exactly this, and enterprises have developed systems in their operations departments that use workflow applications to manage the process. These systems typically work as follows:
Request paperwork is submitted or a ticket is created via the help desk. The help desk/operations department uses a workflow product (like Remedy) to send the ticket to the appropriate administrator. The administrator then performs the identity management operation on the application using the application's administration interface.
This approach has two things missing:
  1. End-to-end automation Due to the human factor, tracking, auditing, and accountability are often not the best things about this process. So it would really be great to have an end-to-end system that allows tracking and auditing of the complete process and gives an accurate status at any point. One way is to automate the complete workflow and include all the entities involved in the process (users, their managers, resource owners). This is an important contribution of the latest breed of provisioning systems. It frees the administrators from the dreaded work of resetting passwords and lets them concentrate on application administration.
  2. User interaction An important part of the previous workflow systems was the dependency on third parties like the help desk/system administrators for the completion of the work. The new products bring in the concept of self-service, where the user can perform basic administration tasks like password reset and creation of accounts on some systems (once they have basic privileges) without requiring input from third parties. It is very important to design the workflow so that the confidentiality and integrity of the systems are not compromised; for example, the password reset workflow should be designed so that only the owner of the account is able to perform the reset (this is typically implemented in a variety of ways, like using a known email id or a personal question/answer).
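A minimal sketch of the self-service reset workflow described above, using the question/answer approach (the data, function names, and audit mechanism are hypothetical): the reset only completes for the account owner, and both outcomes are recorded for the end-to-end tracking that the old ticket-driven process lacked.

```python
import hashlib

def _h(text):
    """Hash helper so neither passwords nor answers are stored in the clear."""
    return hashlib.sha256(text.encode()).hexdigest()

ACCOUNTS = {
    "alice": {
        "password_hash": _h("old-pass"),
        "question": "First pet's name?",
        "answer_hash": _h("rex"),
    }
}

AUDIT_LOG = []   # end-to-end tracking: every attempt is recorded

def reset_password(user_id, answer, new_password):
    """Self-service reset: completes only if the requester proves ownership
    of the account via the personal question/answer. Returns True on success."""
    account = ACCOUNTS.get(user_id)
    if account is None or _h(answer.lower()) != account["answer_hash"]:
        AUDIT_LOG.append(("FAILED", user_id))
        return False
    account["password_hash"] = _h(new_password)
    AUDIT_LOG.append(("COMPLETED", user_id))
    return True

print(reset_password("alice", "Rex", "new-pass"))     # True
print(reset_password("alice", "fluffy", "evil-pass")) # False, and audited
```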
Most clients, being used to the concept of ticket tracking, expect similar functionality from identity management systems. Most of the old ticketing systems allowed users to provide free-text input which was meant for human administrators. This, I think, is the biggest hurdle for the new systems, where the complexity of the ticket that can be generated during the process is very limited and should improve over time. Even though the level of functionality may vary with the implementation, most of the products have the following components in some form or other as part of their implementation.
  1. Interface is, understandably, an important part of identity management. There are two parts to the interface - input and notification. Input basically deals with the interfaces (like web based, fat client, APIs, Web Services, SPML) through which users and other processes can interact with the identity management application to provide the input required for a workflow to complete. Most workflows have various points at which they need user input (like approval of a request, or additional information), and at those points the IDM application needs to notify the user via different kinds of interfaces (like email, Lotus Notes, groupware, pager, and so on). So it is very important that identity management have an appropriate blend of the two kinds of interface.
  2. Delegation is an important part of the operations. This allows the help desk to manage a lot of the basic facilities and frees the application administrator from them.
  3. Rule/Policy Engine Most products support some sort of rule engine. This is an important part that helps in designing the rules that can be associated with input validation, choice of workflows, and so on while implementing complex processes.
  4. Workflow engine This is one of the most basic things that every identity management product providing a complete solution has. It helps define the business processes for identity management and automate them.
  5. Trusted Repository Most enterprises already have separate systems that manage employees (HR ERP), customers (CRM), and so on. These identities must be accepted by the identity management system as trusted identities, and hence the concept of a trusted repository.
  6. Reconciliation/provisioning Adapters/Connectors Well, these are the components that complete the automation. Basically these are the components that connect to the resource managers or their security databases and add the identity information to them.
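The connector and reconciliation ideas in items 5 and 6 can be sketched together (interface and names are illustrative, not any product's API): a connector adapts a resource's security database to a common interface, and reconciliation makes the resource's accounts match the trusted repository - provisioning joiners and deprovisioning leavers.

```python
from abc import ABC, abstractmethod

class Connector(ABC):
    """Adapter that manages identities in one resource's security database."""
    @abstractmethod
    def list_accounts(self): ...
    @abstractmethod
    def create_account(self, user_id): ...
    @abstractmethod
    def delete_account(self, user_id): ...

class DictConnector(Connector):
    """Toy stand-in for an LDAP, RDBMS, or application-API connector."""
    def __init__(self):
        self.accounts = set()

    def list_accounts(self):
        return set(self.accounts)

    def create_account(self, user_id):
        self.accounts.add(user_id)

    def delete_account(self, user_id):
        self.accounts.discard(user_id)

def reconcile(trusted_identities, connector):
    """Make the resource's accounts match the trusted repository (e.g. HR)."""
    existing = connector.list_accounts()
    for uid in trusted_identities - existing:   # joiners: provision
        connector.create_account(uid)
    for uid in existing - trusted_identities:   # leavers: deprovision
        connector.delete_account(uid)

conn = DictConnector()
conn.create_account("orphan")          # an account with no HR record
reconcile({"alice", "bob"}, conn)
print(sorted(conn.list_accounts()))    # ['alice', 'bob']
```

In a real product the same `reconcile` loop would run against many connectors, one per resource manager, which is what makes the automation end to end.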
These components typically form part of identity management systems. Next time I will try to take up Access/Authorization Management systems.

Saturday, February 07, 2004

Identity and Access Management - Part I Introduction

What Consumers Want, or the Problem Definition Well, sometimes even they don't know! But looking at the problem objectively, the definition can be stated as:
"Enterprises have a large number of resources that need to be accessed by a large number of users. With an increasing number of resources being accessed by each user, and each resource being controlled and managed by business groups, the following are some of the pains each party is going through:
  1. End user pain: the number of identities that a user needs to remember to access each resource is increasing.
  2. Management pain: Management does not have a clue about (let alone control over) what users have access to on a day-to-day basis, and they have the auditors/compliance officers breathing down their necks.
  3. Operations pain: Operations spends more and more time correcting the mistakes of users (like password resets), answering management ("get me a report of user access and make sure that all the security policies are followed"), and following business workflows and security policies, with the possibility of making mistakes due to a non-existent end-to-end tracking system.
  4. Developer pain (or is it?): Developers need to write the same code for managing user information every time a new application needs identity and access management.
  5. Resource owner's pain: As resource/data owners they need to be able to control who can access the information while following a specific access policy.
The availability of identity, access, and resource management as a service which can be tapped into by the business groups may be a way to solve everybody's pain."
Before we continue on this topic I would like to bring up the basic idea of a security/identity domain. Basically, it extends from the very common idea of defining the scope of a system. This domain can be a single department, multiple departments, a single enterprise, or multiple enterprises. It is very important to define the domain and keep it in mind while understanding the requirements. So basically this needs to be achieved taking into consideration that there are three aspects of R.A.ID Management ( ;-) ).
  1. Resource: this, at the moment, needs to evolve as a concept. The products have concentrated on I.T. resources (provisioning for servers, databases, ERP, or other third-party products) and, in some cases, user data (privacy managers), but at the same time a resource can be any asset, like an IP address, a multicast address, an in-house application (and its associated data), web services, etc.
  2. IDentity: This is something that seems to be the center of attention at the moment. The basic concept is that every physical/logical entity that needs to be identified has to have a unique identifier in the domain. This identifier must then be mapped to all the digital representations in the various applications/resources/tiers/roles in the domain. Typically these identities have associated information which is referred to as user information/attributes, passwords, users' application data, etc. People have tried to define the concept of identity as a three-tier model and you can check out the link for a complete discussion of the subject. I do not "get it" completely, but I will re-visit this topic.
  3. Access Control: This brings together the identity and the resource. The most popular model of access control is based on a two-component system - the Access Enforcement Point and the Access Decision Point. Whenever the user tries to access a resource, the resource manager (the entity that interacts with the client and serves the resource) or an associated component acts as the "Access Enforcement Point". It needs to know:
    "Is User X Granted Permission Y On Resource Z?"
    The "Access Enforcement Point" typically defers the evaluation of this question to an "Access Decision Point", which has the access policies that enable it to make the decision and return the result to the caller. A typical policy may be of the following form:
    "GRANT|DENY User|{Static or Dynamic Group} X Permission Y on Resource Z if constraint is true"
    I will expand on this concept later(TODO - Add URL). These policies and additional information should be available at the Access Decision Point so that it can evaluate the query and return the answer.
In the next sections I will take up each of the components separately and discuss the concepts around each of them. There will be three sections for each component.
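A tiny Access Decision Point answering exactly the "Is User X Granted Permission Y On Resource Z?" question can be sketched like this (group names, policy data, and the DENY-overrides/default-deny combining choice are all illustrative assumptions, not a standard):

```python
# Static groups; in practice a dynamic group would be a query, not a set.
GROUPS = {"traders": {"alice", "bob"}}

# Policies of the form:
# (effect, user-or-group, permission, resource, constraint(context) -> bool)
POLICIES = [
    ("DENY",  "bob",     "write", "trade-book", lambda ctx: True),
    ("GRANT", "traders", "write", "trade-book",
              lambda ctx: ctx.get("hours") == "market"),
]

def subject_matches(subject, user):
    """A policy subject matches the user directly or via group membership."""
    return subject == user or user in GROUPS.get(subject, set())

def is_granted(user, permission, resource, context=None):
    """The Access Decision Point: explicit DENY overrides any GRANT,
    and with no matching policy the default answer is deny."""
    context = context or {}
    decision = False
    for effect, subject, perm, res, constraint in POLICIES:
        if (subject_matches(subject, user) and perm == permission
                and res == resource and constraint(context)):
            if effect == "DENY":
                return False
            decision = True
    return decision

print(is_granted("alice", "write", "trade-book", {"hours": "market"}))  # True
print(is_granted("bob",   "write", "trade-book", {"hours": "market"}))  # False
```

The Access Enforcement Point would simply call `is_granted` before serving the resource; keeping the two apart is what lets many resource managers share one set of policies.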
  • Concepts This will discuss the basic concepts associated with each component and the important ideas that should be kept in mind.
  • Runtime This section will discuss the typical components that come into play at runtime.
  • Management This section will discuss the typical concepts associated with managing the component