
Wednesday, January 15, 2014

Ontology Summit 2014 Kicks Off Tomorrow, Jan 16

The topic for this year's Summit is "Big Data and Semantic Web Meet Applied Ontology". The Summit kicks off with a conference call on Thursday, January 16th, at 9:30am PST/12:30pm EST/5:30pm GMT (call details). Here is a short excerpt from the Summit's "Goals and Objectives":
Since the beginnings of the Semantic Web, ontologies have played key roles in the design and deployment of new semantic technologies. Yet over the years, the level of collaboration between the Semantic Web and Applied Ontology communities has been much less than expected. Within Big Data applications, ontologies appear to have had little impact. This year's Ontology Summit is an opportunity for building bridges between the Semantic Web, Linked Data, Big Data, and Applied Ontology communities.
For those of you not familiar with the Summit, it was started in 2006, and there is a different theme each year. It is sponsored by a set of organizations including NIST, Ontolog, NCOR, NCBO, IAOA & NITRD. The Summit proceeds through a series of conference calls (every Thursday) and lots of email discussion, and culminates in a face-to-face meeting in late April.

I highly recommend participating, or even just lurking. It is not necessary (or even possible :-) to attend the call every week or to read every email on the ontolog-forum. (Also, it is not mandatory to come to the face-to-face.) If you have to miss something, slides and transcripts of the conference chats, as well as an email archive, are available online. In addition, at the end of the Summit, a "communique" is prepared that summarizes the discussions and work.

I have lurked on the edges of the Summit since the late 2000s, but finally have the time to actively participate. This year, I am co-championing Track A on "Common, Reusable Semantic Content". This topic is near and dear to my heart since I am a strong proponent of reuse. IMHO, it is always valuable (when it is possible) to build on someone else's good work, and benefit from their learnings, rather than starting from scratch. So, when modeling or designing, I look for something similar and then extrapolate from, or build on, it.

Needless to say, I usually don't take other ontologies "en masse", but pick and choose the semantics, patterns or ideas that make sense. How to do this is one aspect of Track A, and there is more. Here are some excerpts from our "Mission" statement:

Semantic technologies such as ontologies and reasoning play a major role in the Semantic Web and are increasingly being applied to help process and understand information expressed in digital formats. Indeed, the derivation of assured knowledge from the connection of diverse (and linked) data is one of the main themes of Big Data ... One challenge in these efforts is to build and leverage common semantic content thus reducing the burden of new ontology creation while avoiding silos of different ontologies. Examples of such content are whole or partial ontologies, ontology modules, ontological patterns and archetypes, and common theories related to ontologies and their fit to the real world ... Achieving commonality and reuse in a timely manner and with manageable resources remain key ingredients for practical development of quality and interoperable ontologies ... This track will discuss the reuse problem and explore possible solutions. Among these are practical issues like the use of ontology repositories and tools, and the possibility of using basic and common semantic content in smaller, more accessible pieces. The goal is to identify exemplary content and also define the related information to enable use/reuse in semantic applications and services. A secondary goal is to highlight where more work is needed and enable the applied ontology community to further develop semantic content and its related information.
I hope that you can join me!

Andrea

Monday, May 2, 2011

Finishing Off the NIST Access Control Survey

I am finally getting a chance to finish the analysis of the NIST survey on access control methods. My apologies ... Somehow, the month of April got away from me ...

The last control model is Risk Adaptive Access Control or RAdAC. It is a combination of attribute and policy-based control (on steroids) with heuristics and machine learning. That last part is what makes it unique, challenging and very exciting.

Saying that attributes include environmental conditions does not seem like rocket-science, but just common sense. It is like saying that no one has permission to enter a building unless they are already identified to the security system. That works great until you need to override the policy because the building is on fire (and the firemen are definitely not already identified). So, including data on the environment (in the set of attributes to be assessed) is prudent.
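
To make this concrete, here is a minimal Python sketch of my own (not from the NIST paper) of an access decision whose inputs include environmental attributes; the attribute names and the fire-alarm override are hypothetical:

```python
# A hypothetical attribute-based check that also weighs environmental conditions.
def grant_building_access(subject: dict, environment: dict) -> bool:
    """Illustrative decision function; all attribute names are invented."""
    # Normal case: the subject must already be known to the security system.
    if subject.get("registered") and subject.get("badge_valid"):
        return True
    # Environmental override: during a fire alarm, emergency responders are
    # admitted even though they were never enrolled in the system.
    if environment.get("fire_alarm") and subject.get("role") == "emergency_responder":
        return True
    return False

# An unregistered firefighter is admitted only while the alarm is active.
print(grant_building_access({"role": "emergency_responder"}, {"fire_alarm": True}))   # True
print(grant_building_access({"role": "emergency_responder"}, {"fire_alarm": False}))  # False
```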

Next, saying that policy can modify existing rules (making them more lax or strict, or modifying them to co-exist - i.e., de-conflicting them) is "meta-policy" (policy about policy). To me, this is just policy-based management - but the targets of the policy are the rules themselves.
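
As an illustration of what I mean by the targets being the rules themselves, here is a hedged sketch (the rule fields and the tightening logic are purely my own invention):

```python
# A hypothetical rule, and a meta-policy whose target is that rule.
base_rule = {
    "id": "allow-vpn-access",
    "condition": {"role": "employee"},
    "action": "permit",
    "strictness": "normal",
}

def apply_meta_policy(rule: dict, threat_level: str) -> dict:
    """Meta-policy: acts on the rule itself rather than on a subject or resource.
    Under elevated threat, it tightens the rule by adding a condition."""
    adjusted = dict(rule)
    if threat_level == "high":
        adjusted["condition"] = {**rule["condition"], "mfa_verified": True}
        adjusted["strictness"] = "strict"
    return adjusted

print(apply_meta_policy(base_rule, "high"))
# {'id': 'allow-vpn-access', 'condition': {'role': 'employee', 'mfa_verified': True},
#  'action': 'permit', 'strictness': 'strict'}
```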

However, the fascinating bit comes into play when the NIST authors discuss taking "a probabilistic, heuristic approach to determine whether the access should be granted ... The heuristics include a historical record of access control decisions and machine learning. This means that a RAdAC system will use previous decisions as one input when determining whether access will be granted to a resource in the future." I would actually expand that last sentence a bit to say "use previous decisions with insider/outsider threat analysis".
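
Here is a rough sketch of what "previous decisions as one input" could look like in code; the risk weighting, the threshold, and the history format are my assumptions, not anything prescribed by NIST:

```python
# Hypothetical risk-adaptive check: combine the static attribute/policy decision
# with a risk score derived from the requester's recent access history.
def risk_score(history: list[dict]) -> float:
    """Fraction of recent requests that were denied or flagged (a crude
    stand-in for a learned model)."""
    if not history:
        return 0.5  # no history: assume moderate risk (an arbitrary choice)
    flagged = sum(1 for h in history if h["outcome"] in ("denied", "flagged"))
    return flagged / len(history)

def radac_decision(policy_permits: bool, history: list[dict],
                   risk_threshold: float = 0.3) -> bool:
    """Grant only if the static policy permits AND the estimated risk is acceptable."""
    return policy_permits and risk_score(history) <= risk_threshold

recent = [{"outcome": "granted"}] * 3 + [{"outcome": "flagged"}]
print(radac_decision(policy_permits=True, history=recent))                       # True (risk 0.25)
print(radac_decision(policy_permits=True, history=[{"outcome": "denied"}] * 4))  # False (risk 1.0)
```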

Do IT systems have what is needed to capture and analyze this information today? I believe that we do. We have cheap storage that can hold extensive log data, sophisticated sensor/management hardware and software, and advanced pattern recognition and analysis software. What we need is more experience and research into the heuristics and strategies to effectively use this information, hardware and software.

The NIST paper goes on to highlight the obstacles to overcome to achieve RAdAC. I want to just briefly note and comment on them here:
  • Integration of a wide variety of systems and data - Which is an area where semantic technologies would be very useful (something that I might have said before)
  • Unambiguous definition of digital policies - I would again encourage investigating and building on semantic technologies, such as the Institute for Human and Machine Cognition's KAoS ontology and framework
  • Trustworthy sources of user and environment information
  • Research into machine learning, genetic algorithms and heuristics - Which is discussed above, and ...
  • A broad swath of non-technical challenges - such as the liabilities associated with a security breach made by an automated entity
Although I have worked on policy-based management for many years, I still worry that automated policy will just allow us to make errors more quickly. So, to NIST's list of obstacles I want to add the need to improve testing, test beds and simulation.

Andrea

Thursday, March 31, 2011

More on the topic of NIST and Access Control

Continuing the discussion from NIST and Access Control, I disagree with some of the distinctions between Attribute and Policy Based Access Control (ABAC and PBAC). Most of this disagreement is really about where attribute- and policy-based controls begin and end, and about their characteristics, rather than about the concepts themselves.

In the report surveying access control models, NIST indicated that "one limitation of the ABAC model is that ... there can be disparate attributes and access control mechanisms among the organizational units. It is often necessary to harmonize access control across the enterprise." This distinction of specific silos of attribute-based control versus more enterprise-coordinated (but still attribute-based) control is the distinction between ABAC and PBAC in the report. For me, I consider this all policy-based access control, where the scope of "harmonization" varies (perhaps by conscious decision or priority of implementation).

It seems reasonable that an organization would work to fully harmonize its access to shared data or resources. That word, "shared", is important. For data and attributes specific to one organizational unit (or domain or application), it is reasonable to have individual policies (i.e., there is nothing to harmonize to). However, once data is shared across applications and org units, there is a need to harmonize - since subjects could be granted or denied access to the same data or resources by different units or applications.

So, there is a need to consider attribute-based control in specific areas AND in the enterprise as a whole. And, there is a need to be consistent with respect to the attributes and the policies. This seems true whether you call this ABAC or PBAC.
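
To illustrate the kind of harmonization I have in mind, here is a small hypothetical sketch in which two organizational units use different attribute names for the same concepts, and a shared policy is evaluated only after mapping them to a common vocabulary (the mappings and the policy are invented for illustration):

```python
# Hypothetical per-unit attribute vocabularies mapped to an enterprise-wide one.
UNIT_MAPPINGS = {
    "engineering": {"emp_level": "clearance", "dept": "organization"},
    "finance":     {"grade": "clearance", "org_unit": "organization"},
}

def harmonize(unit: str, attrs: dict) -> dict:
    """Rename unit-specific attribute keys to the shared enterprise vocabulary."""
    mapping = UNIT_MAPPINGS.get(unit, {})
    return {mapping.get(key, key): value for key, value in attrs.items()}

def shared_policy_permits(attrs: dict) -> bool:
    """One enterprise policy, written against the shared vocabulary."""
    return attrs.get("clearance", 0) >= 3

# Both units' requests are evaluated against the same policy after harmonization.
print(shared_policy_permits(harmonize("engineering", {"emp_level": 4})))  # True
print(shared_policy_permits(harmonize("finance", {"grade": 2})))          # False
```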

The NIST report goes on to say that PBAC "requires not only complicated application-level logic to determine access based on attributes, but also a mechanism to specify policy rules in unambiguous terms." This is not controversial, but I would clarify the term "application-level" in "application-level logic".

I agree that you need "guards" related to an application's need for and use of policy. However, you could achieve this by intercepting application/user attempts to access resources and then applying policy. Of course, you likely also have to map the application details into the attributes of your policies. This is where Semantic Web concepts (such as OWL's SubClassOf, EquivalentClasses, DisjointClasses, etc.) come into play. As for "guards" themselves, there is a good example in KAoS Guards, implemented in the KAoS infrastructure from the Florida Institute for Human and Machine Cognition (IHMC).
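
For the mapping step, here is a hedged sketch using rdflib; the class IRIs are invented for illustration, and this shows only how the OWL axioms might be declared, not the KAoS framework's actual API:

```python
# Mapping an application's attribute classes to an enterprise policy vocabulary
# with OWL axioms (EquivalentClasses, SubClassOf). IRIs below are hypothetical.
from rdflib import Graph, Namespace, RDF, RDFS, OWL

APP = Namespace("http://example.org/app#")      # an application's vocabulary
POL = Namespace("http://example.org/policy#")   # the enterprise policy vocabulary

g = Graph()
g.bind("app", APP)
g.bind("pol", POL)

# Declare the classes and state that the application's "PrivilegedUser"
# means the same thing as the policy vocabulary's "Administrator".
g.add((APP.PrivilegedUser, RDF.type, OWL.Class))
g.add((POL.Administrator, RDF.type, OWL.Class))
g.add((APP.PrivilegedUser, OWL.equivalentClass, POL.Administrator))

# The application's "Contractor" is a narrower notion than the policy's "ExternalParty".
g.add((APP.Contractor, RDFS.subClassOf, POL.ExternalParty))

print(g.serialize(format="turtle"))
```

With axioms like these in place, a reasoner (or even simple subclass/equivalence entailment) can treat an app:PrivilegedUser request as a pol:Administrator request when the policies are evaluated.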

The second part of the quoted sentence talks about the need to specify policies in "unambiguous terms". This was one of the key points of my earlier post - one where we need end-user and government/organizational participation. BTW, I will be so bold as to also endorse the ontology work of the KAoS team as a basis for both access control (authorization) and obligation policies (see the entity rendering of their OWL ontology at http://ontology.ihmc.us/policy.html).

So, where do I start to disagree a bit more with the NIST position? It is in the area of an "Authoritative Attribute Source" ("the one source of attribute data that is authorized by the organization"). I cannot argue with taking this as a first line goal. Fred Wettling also pointed this out in his comments to my earlier post ... that there needs to be "a rethinking of the hub and spoke models of access control of the past. This raises the question of where identity-based attributes are stored and how/when they are validated. Can peer-to-peer communications rely on attribute assertions from the peers? In some cases yes. In other cases, third-party validation may be required. In both cases a centralized infrastructure may NOT exist. That is a challenge that may be addressable through some level of standardization of vocabulary, policy, and rule structures."

While I agree that one authorized attribute source is required by some scenarios, it is not mandated in all scenarios or even required for PBAC. As noted above, a common understanding of the semantics of the attributes and their "qualities" (including where and how they are obtained) is needed. This can be managed and mitigated by semantic alignment and policies about the data itself.

Just some more food for thought in the continuing dialog about access control and policy-based security/management/...

Next time, I promise to finally write about Risk Adaptive Access Control.

Andrea

Monday, March 21, 2011

Dialog on "NIST and Access Control", my previous post

I received a notification that the following comment was posted by Fred Wettling (from Bechtel) ... but it does not appear on the web site. So, I am re-posting it as a full entry and then adding a bit of dialog of my own.

My apologies to Fred that the site is not working quite right, and my appreciation for the additional insights! Here is Fred's post ....
Good post, Andrea ;-)

Here are a few initial thoughts...The overall NIST model can be viewed as moving from topology-based control (network boundaries) to policy-based control. RBAC, ABAC, PBAC... appear to be the evolution of how policies are instantiated and the level of granularity required for the target level of security. ACLs, firewall rules, and other topology-based controls will be around for a while and continue to serve as a coarse-grained access control for many hosted services within companies and also cloud providers that have requirements for zone-based controls.

But topologies are changing from at least three perspectives that must be addressed in tomorrow's security.
1. Resources and information are becoming more distributed as we move from a single source of access (monolithic applications) toward a "single source of truth" with mash-ups securely accessing and aggregating multiple authoritative information sources.
2. More distributed information sources including peer-to-peer communications.
3. Location-based services have an implication of geographic or physical context that may be relevant in security decisions.

There's also a needed (and implied) shift in WHERE security is applied. The trend must be toward the target resource or information to operate in the expanding Internet of Things. The implication of these topology-related trends will require changes to how information and resources are secured… and a rethinking of the hub and spoke models of access control of the past. This raises the question of where identity-based attributes are stored and how/when they are validated. Can peer-to-peer communications rely on attribute assertions from the peers? In some cases yes. In other cases, third-party validation may be required. In both cases a centralized infrastructure may NOT exist. That is a challenge that may be addressable through some level of standardization of vocabulary, policy, and rule structures that you mentioned. The challenge, as usual, is frequent vendor perception that product differentiation is achieved through proprietary technology.

I think there needs to be some industry awakening about the value of standard & policy-based access control. There is high-value to organizations to have common policy definitions accessible by PDPs and PEPs provided by multiple vendors.
1. DMTF has done some great work collaborating with others in standardizing policy models.
2. Before it merged with the Open Group in 2007, the Network Application Consortium (NAC) published the Enterprise Security Architecture. Link: https://www2.opengroup.org/ogsys/jsp/publications/PublicationDetails.jsp?catalogno=h071. A team has been established to update the 115 page doc. It has some great content that is relevant to this thread.

NIST could certainly help in pushing this work forward and promoting policy standardization in other standards organizations, including the two mentioned above. Unification would be a good thing.

Fred Wettling
Bechtel Fellow
And, here are my replies:

1. As for ACLs being around for a while, I totally believe that. However, I am not sure that they will be around out of necessity; rather, it will be due to the persistence of legacy systems and the reluctance of vendors to move away from tested and costly proprietary technologies. I think that we can move fully to PBAC or RAdAC policies, at least at the declarative level. Let me explain ... If hardware and software products understand a "standard" policy ontology and vocabulary, then they can implement and act on declarative policy, as opposed to processing device/OS/software-specific ACLs. Now, a device may only be capable of coarse-grained control, or coarse-grained control may be the appropriate implementation for certain environments (for example, "zone-based control" in a cloud), where the fine-grained details are not (and should not be) known. But a different take on this, rather than requiring an ACL, is to say that a "standard" rule is translated to coarse-grained protection as part of the PBAC/RAdAC processing (I sketch this translation in code after these replies). Even in the worst case of using only ACLs, the policy infrastructure can relate the ACL to the declarative policy that it was designed to implement. Traceability is a wonderful thing!

2. It is noted that a centralized infrastructure may not exist. I agree but also want to allow for centralized policy declaration (for example, from federal, state or local governments, or from a Governance/Compliance Policy-Setting Body in an enterprise) and decentralized, local policy definition and override. On the distribution side, it is necessary to disseminate the broad coverage policy (such as legislation), as well as the rules for how/when/by whom they can be modified. Does this mean "centralized" infrastructure? Not necessarily, but there may/should be "standard" policy repositories. Regarding policy decision and enforcement, there may be security aspects that can/should be processed locally, others that require third party validation, some that rely on sophisticated, controlled policy decision point analysis, etc. ... It eventually comes down to policy on policy. But, in any/all cases, security needs to work when offline/disconnected.

3. Fred lists work by DMTF and NAC. The NAC paper is very insightful and I encourage people to read it. As for the DMTF, they have indeed pushed the boundaries of policy-based management, although I find their model to be complex (due to the use of a query language in defining policy conditions and actions, and the disconnected instantiation of the individual rule components). Also, CIM requires extension to address all the necessary domain concepts and vocabularies, which will not be easy.

4. I was a bit unclear in my reference to NIST and helping with "practices and standards". One of my NIST colleagues pointed out that they do not set industry standards, but support the development of "open, international, consensus-based" ones. In addition, they develop guidance for Federal agencies. This was what I meant, although I did not use enough words. :-) Federal, state, and local governments are the ones that write legislation/statutes. But, NIST could encourage government agencies to publish their policies in a standard format, and that would help industry and promote competition. This was indeed my goal in mentioning NIST - not for them to create a standard, but to help drive one and ensure its utilization by Federal agencies. NIST is in a unique position to help because they are not a policy-setting agency. And, as Fred noted above "the challenge ... is frequent vendor perception that product differentiation is achieved through proprietary technology."
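
To make reply #1 above a bit more concrete, here is a sketch of translating a declarative rule into a device-level ACL entry while keeping traceability back to the policy it implements; the rule format, ACL fields, subnet and port are all hypothetical stand-ins, not any vendor's or standard's actual syntax:

```python
# A hypothetical declarative rule and a coarse-grained ACL entry derived from it.
declarative_rule = {
    "id": "pol-042",
    "description": "Finance staff may reach the reporting service from the corporate network.",
    "subject": {"organization": "finance"},
    "resource": "reporting-service",
    "action": "permit",
}

def to_device_acl(rule: dict, subnet: str, port: int) -> dict:
    """Translate a declarative policy rule into a device-level ACL entry.
    The subnet and port are the coarse-grained stand-ins a firewall can enforce;
    'source_rule' preserves traceability back to the declarative policy."""
    return {
        "action": rule["action"],
        "src": subnet,
        "dst_port": port,
        "source_rule": rule["id"],
    }

print(to_device_acl(declarative_rule, subnet="10.20.0.0/16", port=443))
# {'action': 'permit', 'src': '10.20.0.0/16', 'dst_port': 443, 'source_rule': 'pol-042'}
```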

Well, I'm sure that this is too long already. Look for more in the next few days.

Andrea