May 25, 2013
Buzz data is going to be moved to Google Drive according to an email doing the rounds. This comment caught my eye though.
"Bradley Horowitz said at the time that the lessons that Google learned from the products short existence would be used in other services like Google+"
So what features that used to be in Buzz do you wish were in Google Plus now?
1) A tab on profiles listing all my comments
2) An RSS/Atom feed of my public posts
3) Vanity Profile URLs for everyone
4) Auto-import of posts from feeds. But to a separate tab on profiles, not to our public streams
5) Easy Location tagging of public posts when using the desktop web interface.
I'm sure there's more.
[from: Google+ Posts]
There's an email from Google doing the rounds about the final act for Buzz. Our old posts are going to be archived to our Google Drive, apparently.
It's prompted me to go back and re-read my burblings from 2 years ago!
"Bradley Horowitz said at the time that the lessons that Google learned from the products short existence would be used in other services like Google+"
I wonder what those lessons were. Because there's still some functionality that used to be in Buzz but isn't in Google+, and I miss it.
[from: Google+ Posts]
May 24, 2013
Oh no! I should never have agreed to that IoT WebSSO thing
Every time I revoke that diaper token I am immediately signed back in by the cradle.
Paul Madsen posted an excellent article today, “Identity, Application Models and the Internet of Things,” recommending that the prevailing application development model move back to the browser and away from native apps. He references another excellent article by Scott Jenson, “Mobile Apps Must Die,” which holds that because we use so many native mobile apps, they are “becoming too much trouble to organize and maintain,” and that the native app model “just can’t take advantage of new opportunities.”
Paul observed how, with the prevailing native app model, the “Internet of things would push us to have 1000s of native applications on our devices, but that would place a completely unrealistic management burden on the User.”
I agree that managing large numbers of apps is becoming very burdensome and counterproductive. Each airline I fly has its own app. Each store I frequent has its own app. I have apps upon apps upon apps.
I propose, however, that just focusing back on browser apps doesn’t completely solve the problem, particularly with the Internet of Things. A big problem is the narrow siloed focus of so many apps.
I recently bought a Fitbit device to track all the steps I take and stairs I climb. It is a nice little device that syncs automatically with an app on my iPhone. I can also use that app to record food I eat and water I drink along with the automatic recording of steps and stairs.
However, the app covers only a fairly narrow silo of functionality. If I want to record other vital statistics (e.g., blood pressure or blood glucose), it takes another app. If I want to record my workout at the gym with any degree of granularity, it takes another app. Of course, every app has a different concept of my identity. Not good.
Paul’s discussion of an app to monitor his toaster raises the question – why should I have an app (either web or otherwise) for every device in my house? Doesn’t it make more sense to have a “home management” app that accommodates toasters, fridges, thermostats, smoke alarms or whatever other Internet-connected things may be available?
I propose that we need a new app paradigm that retains the great user interface characteristics of native apps and the “just in time” model of discovery and use that Paul and Scott recommend, coupled with a more integrated approach to solving real-life, more complex use cases.
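To make the idea concrete, here is a minimal, purely illustrative Python sketch of what such a “home management” surface might look like: one user identity and one dashboard, with each connected thing registered as just another data source. All class names, device names and fields below are hypothetical, not a description of any real product.

```python
# Illustrative sketch only: heterogeneous connected things share one user
# identity and one app surface, instead of one siloed app per device.
from dataclasses import dataclass, field
from typing import Dict


@dataclass
class Device:
    name: str
    kind: str                      # e.g. "toaster", "thermostat", "fitness-tracker"
    readings: Dict[str, float] = field(default_factory=dict)

    def report(self) -> str:
        values = ", ".join(f"{k}={v}" for k, v in self.readings.items())
        return f"{self.name} ({self.kind}): {values or 'no data'}"


@dataclass
class HomeManager:
    """One identity, many things: the user authenticates once to the manager,
    and every device is just another data source behind that single context."""
    user_id: str
    devices: Dict[str, Device] = field(default_factory=dict)

    def register(self, device: Device) -> None:
        self.devices[device.name] = device

    def dashboard(self) -> str:
        lines = [f"Home dashboard for {self.user_id}"]
        lines += [f"  - {d.report()}" for d in self.devices.values()]
        return "\n".join(lines)


if __name__ == "__main__":
    home = HomeManager(user_id="mark@example.com")   # hypothetical identity
    home.register(Device("kitchen-toaster", "toaster", {"crumb_tray_pct": 80}))
    home.register(Device("hall-thermostat", "thermostat", {"temp_c": 21.5}))
    home.register(Device("fitbit", "fitness-tracker", {"steps": 9200}))
    print(home.dashboard())
```

The point of the sketch is the shape of the solution, not the code itself: one identity and one integration point, rather than a separate app and a separate account for every device.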
Yesterday, I was introduced to a recently-published 90+ page report, “The Report of the Commission on the Theft of American Intellectual Property.”
The Commission on the Theft of American Intellectual Property is an independent and bipartisan initiative of leading Americans from the private sector, public service in national security and foreign affairs, academe, and politics. The three purposes of the Commission are to:
- Document and assess the causes, scale, and other major dimensions of international intellectual property theft as they affect the United States
- Document and assess the role of China in international intellectual property theft
- Propose appropriate U. S. policy responses that would mitigate ongoing and future damage and obtain greater enforcement of intellectual property rights by China and other infringers
The members of this commission represent an interesting cross section of private and public sector leaders:
- Dennis C. Blair (co-chair), former Director of National Intelligence and Commander in Chief of the U. S. Pacific Command
- Jon M. Huntsman, Jr. (co-chair), former Ambassador to China, Governor of the state of Utah, and Deputy U. S. Trade Representative
- Craig R. Barrett, former Chairman and CEO of Intel Corporation
- Slade Gorton, former U. S. Senator from the state of Washington, Washington Attorney General, and member of the 9-11 Commission
- William J. Lynn III, CEO of DRS Technologies and former Deputy Secretary of Defense
- Deborah Wince-Smith, President and CEO of the Council on Competitiveness
- Michael K. Young, President of the University of Washington and former Deputy Under Secretary of State
The report addresses the huge scale of intellectual property theft – involving hundreds of billions of dollars and huge impact on ongoing innovation:
The scale of international theft of American intellectual property (IP) is unprecedented—hundreds of billions of dollars per year, on the order of the size of U. S. exports to Asia. The effects of this theft are twofold. The first is the tremendous loss of revenue and reward for those who made the inventions or who have purchased licenses to provide goods and services based on them, as well as of the jobs associated with those losses. American companies of all sizes are victimized. The second and even more pernicious effect is that illegal theft of intellectual property is undermining both the means and the incentive for entrepreneurs to innovate, which will slow the development of new inventions and industries that can further expand the world economy and continue to raise the prosperity and quality of life for everyone. Unless current trends are reversed, there is a risk of stifling innovation, with adverse consequences for both developed and still developing countries. The American response to date of hectoring governments and prosecuting individuals has been utterly inadequate to deal with the problem.
The report recommends several short, medium and long term remedies, including public policy, legislation, public/private cooperation and advances in cyber security technology and processes.
In the last category, I was interested to read the following observation (emphasis mine):
Even the best security systems using vulnerability-mitigation measures, including those with full-time dedicated operations centers, cannot be relied on for protection against the most highly skilled targeted hackers. A network exists in order to share information with authorized users, and a targeted hacker, given enough time, will always be able to penetrate even the best network defenses.
Effective security concepts against targeted attacks must be based on the reality that a perfect defense against intrusion is impossible. The security concept of threat-based deterrence is designed to introduce countermeasures against targeted hackers to the point that they decide it is no longer worth making the attacks in the first place. In short, it reverses the time, opportunity, and resource advantage of the targeted attacker by reducing his incentives and raising his costs without raising costs for the defender. Conceptual thinking about and effective tools for threat-based deterrence are in their infancy, but their development is a very high priority both for the U. S. government and for private companies.
The observation that “a perfect defense against intrusion is impossible,” is chilling. What is to be done?
The report’s recommendation to battle this challenge:
Encourage adherence to best-in-class vulnerability-mitigation measures by companies and governments in the face of an evolving cybersecurity environment. Despite their limited utility against skilled and persistent targeted hackers, computer security systems still need to maintain not only the most up-to-date vulnerability-mitigation measures, such as firewalls, password-protection systems, and other passive measures.
They should also install active systems that monitor activity on the network, detect anomalous behavior, and trigger intrusion alarms that initiate both network and physical actions immediately. This is a full-time effort. Organizations need network operators “standing watch” who are prepared to take actions based on the indications provided by their systems, and who keep a “man in the loop” to ensure that machine responses cannot be manipulated.
Organizations need to have systems—software, hardware, and staff—to take real-time action to shut down free movement around the house, lock inside doors, and immobilize attackers once the alarms indicate that an intrusion has started. Some government agencies and a few corporations have comprehensive security systems like this, but most do not.
The bottom line is that Intellectual Property espionage is a huge problem with no simple solutions. Technology alone cannot solve the problem. There are major social, political, economic and cultural challenges that must be addressed. But we in the information security business have our work cut out for us.
Centrify has an extensive track record and experience in developing and delivering solutions that are in the critical path of operations in the largest data centers in the world. We have invested a considerable amount of time and resources extending this same trust level to the Centrify Cloud Service so that customers can rest assured that they are receiving a verified, secure, highly available and trustworthy service. In this blog post I want to discuss some of the recent investments we have made to this end.
In a blog post entitled 'Mobile apps must die', Scott Jenson argues that the Internet of Things (and the associated implication of having to interact with all the 'things') will make the native application model impractical, and push application development back to the browser.
I buy the argument, will repeat some of it and will try to tease out some of the identity implications.
First, a bit of a recap of Scott's argument (or my interpretation of it, at least):
So the Internet of Things would push us to have 1000s of native applications on our devices, but that would place a completely unrealistic management burden on the User – installing, authenticating, sorting, updating, and deleting applications.
- Whereas on a desktop we might have had ~10 installed apps, on a phone or tablet we might have ~100. Users have to manage this list. It is trivially easy to install apps from the app stores. That's great from the app developer's PoV – it minimizes the friction of installation and so allows Users to play and experiment. But from the User's PoV there is a price to be paid for easy experimenting – the application remains. SSO between these apps helps, but the problem is bigger.
- Offline mode will become less relevant as connectivity becomes ubiquitous. 'CEO on a plane' will disappear as an important use case when every plane has wifi. Consequently, the advantage native has over browser models with respect to supporting offline storage will become less relevant.
- As more and more objects become connected (IoT), the nature of mobile applications (through which we'll interact with those objects) will have to change accordingly. When my fridge, dryer, furnace, air conditioner, thermostat etc are all connected and desperately want to interact with me – do I want a unique app for each of them? And what about objects outside the house – coke machines, point-of-sale terminals, bus stop schedules, restaurant menus, gas pumps etc?
The problem is what the current native application life-cycle looks like. This sequence places a heavy burden on the user and is very static – not particularly applicable to a 'Just in time' model (as Scott puts it) where we might interact with an application once and never again. Clearly this isn't viable in an IoT world where we would constantly be presented with previously unseen connected objects. We'd spend our days installing apps, and by the time we were ready to use them, the opportunity would have passed (somebody else would have grabbed the last Dr …).
IoT demands an application interaction model that is far more dynamic, something like:
- Sense – my device must be constantly on the lookout for IoT connected objects and, based on rules I've defined, determine whether & how best to interact with them
- Notify – based on rules I've defined, prompt me so that I know I can now interact with the object
- Authenticate – the object may need to know who I am, but this obviously has to be seamless from a UX PoV (the object may have to be able to authenticate to me as well)
- Use – I interact with the object. This can't require an 'install'; instead, whatever unique application functionality is needed must be downloaded and run in an existing app designed with this sort of dynamic model in mind, ie a web page running in a browser
- Cleanup – as there was no install, there are no artifacts (except perhaps some state to simplify the next interaction) to be cleaned up
The Internet of Things would appear then to be pushing us towards a future where applications are used 'just in time' rather than installed and managed. The identity implications can be summarized as:
- The pendulum swings back to the browser (& so HTML5 comes into its own)
- The continued importance of the browser means Web SSO remains relevant
- For Web SSO, SAML gives way to … from APIs offered up by the 'things' (or network endpoints on their behalf)
- SSO (in the sense of facilitating seamless user authentication to all the various IoT objects) is absolutely critical. IoT won't scale without SSO to the T.
Imagine I have a smart toaster that I want to interact with on my phone to determine if I need to empty the crumb tray (this needs to happen. Science!!). This diagram is a really rough attempt at a model:
- does the toaster advertise its presence to the phone?
- is the user invited to interact with the toaster?
- toaster data (crumb tray status etc) gets sent to the toaster cloud for analysis
- toaster data (crumb tray status etc) gets sent to the phone for display
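For illustration only, here is a rough Python sketch of the sense / notify / authenticate / use / cleanup loop applied to the toaster example. The device names, token format and crumb-tray threshold are all invented for the sketch; this is not a description of any real protocol.

```python
# Purely illustrative sketch of the "just in time" interaction loop above.
import secrets


def sense(nearby_objects):
    """Pretend discovery step: filter advertised objects against user rules."""
    rules = {"toaster"}                         # I only care about toasters today
    return [obj for obj in nearby_objects if obj["kind"] in rules]


def notify(obj):
    print(f"Notification: you can interact with '{obj['name']}'")


def authenticate(obj):
    """Stand-in for seamless, token-based (ideally mutual) authentication."""
    session_token = secrets.token_hex(8)        # hypothetical short-lived token
    print(f"Authenticated to {obj['name']} (session {session_token})")
    return session_token


def use(obj, token):
    """'Just in time' use: nothing is installed, we just fetch and render state."""
    crumb_level = obj["crumb_tray_pct"]
    action = "Empty the crumb tray!" if crumb_level > 75 else "Crumb tray is fine."
    print(f"[{token}] {obj['name']}: tray at {crumb_level}% - {action}")


def cleanup(token):
    print(f"Session {token} discarded; nothing was installed, nothing to remove.")


if __name__ == "__main__":
    advertised = [{"name": "kitchen-toaster", "kind": "toaster", "crumb_tray_pct": 82}]
    for thing in sense(advertised):
        notify(thing)
        tok = authenticate(thing)
        use(thing, tok)
        cleanup(tok)
```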
From Mike Small:
A recent report commissioned by CA Technologies Inc. looks at the growth of the use of cloud services and evolving attitudes to their security. This report shows some interesting findings. For instance, Europe is catching up with the US, with "38% of the European respondents using cloud for two to three years", compared with "55% of the companies in the US have been in the cloud for three or more years".
This finding is confirmed by the recent announcement by salesforce.com that they have signed an agreement to establish a European data centre in the UK in 2014. According to Marc Benioff, Chairman and CEO, salesforce.com, "Europe was salesforce.com's fastest growing region in our fiscal year 2013, delivering constant currency revenue growth of 38%". The same press release includes a forecast from IDC "that Europe's public cloud software market will grow three times faster than other IT segments, at a CAGR of 30% to reach €23.9 billion by 2017".
One of the reasons for opening the European data centre given by salesforce.com at their Cloudforce event in London on May 2nd was to answer the security concerns of EU governments and organizations relating to the location of their data. While security concerns remain a key issue for organizations adopting the cloud, the CA Technologies Inc. report discusses the "Security Contradiction". According to this report, "Ninety-eight percent of enterprises surveyed reported that the cloud met or exceeded their expectations for security". At the KuppingerCole European Identity & Cloud Conference held in Munich May 14-17, 2013, one session given by a UK organization described how they had moved to the cloud primarily for security reasons.
So – according to these reports – it would seem that the cloud is blossoming in Europe and that customers believe that cloud providers are making good on their promises around security. However, our advice remains "Trust but Verify" – using the cloud inherently involves an element of trust between the organization using the cloud service and the CSP. This trust must not be unconditional, and it is vital to ensure that the trust can be verified. Organizations still need a fast, reliable and risk-based approach to selecting cloud services, as described in our Advisory Note: Selecting your cloud provider – 70742.
May 23, 2013
Good interview with the Ginger Twat, or Tony Carter as he's better known. Familiar to anyone who watches motorcycle racing on Eurosport.
[from: Google+ Posts]
Companies experienced in cloud computing say it is proving to be even more successful than originally anticipated. That is according to a newly published report, TechInsights Report: Cloud Succeeds. Now What?, based on research by Luth Research and Vanson Bourne. The study talked to 542 senior IT leaders at companies in North America and Europe with at least $500 million in revenue which had...
How should YouTube - Google Music - G All Access integration work? Given that YouTube is now one of the foremost ways of consuming music, especially very new, preview and obscure music.
This question was prompted by Last.FM integrating videos from Muzu into their system.
[from: Google+ Posts]
Last Friday afternoon, at the invitation of Doug Brunke of GrowthNation, I was privileged to attend a private showing of the SolarImpulse airplane during its stop in Phoenix along its Across America tour.
What a delightful experience! More than just a fun scientific excursion, to me this was a celebration of innovation, dedication and profound enthusiasm for conquering the impossible. Bertrand Piccard, co-founder and chairman of SolarImpulse, has stated:
Adventure is not necessarily a spectacular deed, but rather an “extra-ordinary” one, meaning something that pushes us outside our normal way of thinking and behaving. Something that forces us to leave the protective shell of our certainties, within which we act and react automatically. Adventure is a state of mind in the face of the unknown, a way of conceiving our existence as an experimental field, in which we have to develop our inner resources, climb our personal path of evolution and assimilate the ethical and moral values that we need to accompany our voyage.
The solar powered airplane, with a wingspan of 208 feet, uses 2,000 square feet of solar panels to power its flight and charge its batteries, so it can fly both during the day and at night. It completed a 26 hour day and night flight in 2010. A second generation aircraft, currently under construction, is scheduled to attempt an around the world flight in 2015.
Besides viewing the airplane and talking to engineers who were preparing for the next leg of its journey to Dallas, Texas, we were addressed by Dr. Piccard and the second pilot, André Borschberg, “an engineer and graduate in management science, a fighter pilot and a professional airplane and helicopter pilot,” who is the co-founder and CEO. I found their messages challenging and enlightening. I applaud their innovation and tenacity.
Several photos I took during the tour have been uploaded to SmugMug if you would care to take a look.
Just seen in a comment:
Fossil fuels are made from once living things and burning them releases their joy back into the world
Think about it. All that carbon locked up in fossil fuels is unavailable to photosynthesis and hence life's food chain. We're setting it free to be turned back into life. What's a few millennia and extinctions between friends?
Which raises the question: if humanity's total industrial use of fossil fuels and other resources wipes us out, how many millions of years would it take to lay down another fossil fuel supply, and for another intelligent animal to appear to make use of it?
[from: Google+ Posts]
How to explain how we got here?
- I wrote a blog post called Please Send Wicked Simple Email, inspired by the jaw-droppingly great messages T.Rob Wyatt was sending to the VRM (Vendor Relationship Management) and personal cloud mailing lists. I lobbied for T.Rob’s thoughts to go onto his excellent blog for easier and longer-term sharing.
- Today T.Rob does just that and puts up a killer blog post about why we need VRM from a privacy and personal data rights standpoint that argues the case as strongly as anything since John Kelly’s killer talk on personal clouds at Gartner Symposium or Doc Searls’ book The Intention Economy.
- I read T.Rob’s post and realize he’s nailed it so well that he explains exactly why we needed to define the Respect Trust Framework before we could build the Respect Network.
Here is the paragraph where T.Rob nails it:
VRM, or Vendor Relationship Management, is a new approach to conducting business in which the missing physical constraints [for protecting privacy and personal data] have been replaced by technological and policy constraints that restore the balance of power between individuals and their vendors, and perhaps to some extent also their governments.
Now read this purpose statement from the first line of the Respect Trust Framework:
The purpose of the Respect Trust Framework is to define a simple set of principles and rules to which all Members of a digital trust network agree so that they may share identity and personal data with greater confidence that it will be protected and only used as authorized.
Separated at birth…and I don’t know if T.Rob has even seen the Respect Trust Framework.
Given the depth of his knowledge and research, I wouldn’t be surprised—I just haven’t heard him mention it yet. But no matter—he came to exactly the same conclusion as those of us founding the Respect Network: the privacy-invading technology genie is out of the bottle and there’s no stuffing him/her back in. So the alternative is to “restore the balance of power” a different way, with an opt-in network where everyone agrees to play by a new set of rules.
I can hardly wait to get the network fully operational—all I can say is that the 24 Respect Network Founding Partners are working like mad to get there. If you want an in-depth progress report, come see us at the next Internet Identity Workshop coming up in Mountain View May 7-9.
KuppingerCole has bestowed the KuppingerCole European Identity Awards since 2008 in recognition of excellent projects in the area of Identity and Access Management (IAM), GRC (Governance, Risk Management, and Compliance), and Cloud Security. This report gives a brief overview of the project performed at EVRY ASA, a leading IT system integrator and service provider based in Norway.
The BIFROST Cloud Security platform...
I missed this post on Life Management Platforms (LMPs) by KuppingerCole a year ago. Martin writes:
Life Management Platforms will be among the biggest things in IT within the next ten years. They are different from “Personal Data Stores” in the sense of adding what we call “apps” to the data stores and being able to work with different personal data stores. So they allow to securely working with personal data by using such apps which consume but not unveil that data – in contrast to a data store which just could provide or allow access to personal data. They thus are more active and will allow every one of us to deal with his personal data while enforcing privacy and security. Regarding “Personal Clouds”, that might be or become Life Management Platforms. However I struggle with that term given that it is used for so many different things. I thus prefer to avoid it. Both today’s personal data stores and personal clouds have a clear potential to evolve towards Life Management Platforms – let’s wait and see. I’ve recently written a report on Life Management Platforms, describing the basic concepts and looking at several aspects like business cases. This report is available for free.
The key sentence is:
“They are different from “Personal Data Stores” in the sense of adding what we call “apps” to the data stores”
I agree with this need to add apps to PDSes. This is similar to what we worked on back in the heady Information Card days. In 2009, when we saw that getting adoption of pure InfoCards was going to be a long march, we tried to add the concept of “apps” to the data held in the InfoCard (or better yet, pointed to by an r-card). We called them app-cards – a superset of InfoCards. We decided to integrate Kynetx‘s KRL technology to implement these apps, which meant the apps got executed in the cloud.
May 22, 2013
As I reviewed news stories about the tragic Oklahoma tornado, I couldn’t help but notice the stark contrast between a photo taken from far away and one taken up close and personal. The first photo is from NASA: “The image was captured on May 20, 2013, at 19:40 UTC (2:40 p.m. CDT) as the tornado began its deadly swath.”
The second is from a CBS News account on the day the storm hit: “A child is pulled from the rubble of the Plaza Towers Elementary School in Moore, Okla., and passed along to rescuers Monday, May 20, 2013.”
My thoughts and prayers go out to the people who are struggling to cope with the aftermath of this huge disaster. How wonderful to hear stories of the many, many people who are giving personal, selfless service to help the good people of Oklahoma.
I like the diagram Mark O’Neill of Vordel put in a recent post, “Identity is the New Perimeter.” That phrase has been floating around for some time, but I think this diagram illustrates the concept in the simplest, clearest way I have seen:
The article does a good job of describing this new way of looking at security. As Mark mentioned in the post, Bill Gates once said, “security should be based on policy, not topology.”
In April 2013 McAfee announced the addition of Identity and Access Management solutions to its Security Connected portfolio. The products that were previously developed and sold by Intel include McAfee Cloud Single Sign On and McAfee One Time Password. In addition to the products McAfee also introduced the new McAfee Identity Center of Expertise, staffed with experts in identity and cloud security. That free service will assist users with support pertaining to identity and access...
Today, I read an interesting white paper, “Big Data in M2M: Tipping Points and Subnets of Things,” published by Machina Research. From the introduction:
This White Paper focuses on three hot topics in the TMT space currently: Big Data and the ‘Internet of Things’, both examined through the prism of machine-to-machine communications. We have grouped these concepts together, since Big Data analytics within M2M really only exists within the context of heterogeneous information sources which can be combined for analysis. And, in many ways, the Internet of Things can be defined in those exact same terms: as a network of heterogeneous devices.
The white paper does a good job of exploring the emerging trends of the Internet of Things, potential business opportunities and challenges faced.
As one could expect, “authenticity and security of different kinds of data” was identified as a big challenge:
Big Data is about “mashing up” data from multiple sources, and delivering significant insights from the data. It is the combination of data from within the enterprise, from openly available data (for example, data made available by government agencies), from data communities, and from social media. And with every different source of data arises the issues of authenticity and security. Machina Research predicts that as a result of the need for data verification, enterprises will have a greater inclination to process internal and open (government) data prior to mashing-up with social media.
The following diagram shows the increased security risk as more data from external sources is collected and analyzed.
This is yet another indicator of how Identity and Access Management will be critical to the successful evolution of the Internet of Things.
May 21, 2013
Last week, I introduced my favorite topic—digital context—and laid out a plan for how to consider the case. Today, we’ll dive in with a real-world example, looking at how freeing context from across application silos helps us make more considered, immediate, and relevant access control decisions. For those of you who have been following along (and thanks for sticking with me in my madness), this is blog 8 in response to Ian Glazer’s provocative video on killing IAM in order to save it. And if you haven’t been with me from the beginning: I’m in favor of skipping the murder and going straight to the resurrection. Those of you who are coming in late to the game, here’s the recent introduction to context, or you can catch up with the entire story in order here: one, two, three, four, five, six, seven.
It All Starts with Groups: The Simple, Not Especially Sophisticated Solution
Let’s start first with the notion of groups and their implementation. On the surface, nothing could be more straightforward: If I have to manage a sizeable set of users and assign them different rights to applications, I need to categorize those users into groups with the same profile, whether that’s by function, role, need to know, hierarchy, or some other factor. This is the simplest approach to any categorization: creating some “relevant” labels, then assigning the people that fit within those labels to define groups.
So let’s say we’re creating groups based on work functions, such as sales, marketing, production, and administration. All we need to do is list all the people under a particular function, create a label, and then assign this label to those people. Couldn’t be easier, right? The simplicity of the process explains the huge success of groups—and although we implementers tend to make fun of groups as crude categorizations, I would guesstimate that at least 90% of our authorization policies are still implemented through groups. (So much for all that talk about advanced fine-grained authorization! But I’m getting ahead of myself here…)
In fact, we’ve become so dependent on groups that in many cases, especially with sizeable organizations where the business processes are quite refined and well managed, we’re seeing that there are often more groups than users! At first glance, this seems paradoxical—after all, what’s the point of regrouping people if you have more groups than people? But the joke is on us technical people because we ignored another key reality: the business one. Sure, we could have a lot of people, but a well-managed and productive organization generally has even more activities (or different aspects of a given activity), and those require the multiplication of groups. So we gave our users a simple mechanism to categorize people into groups, and they used it—talk about being a victim of our own success!
Basically, we played the sorcerer’s apprentice and our simple formula yielded a multiplication of groups, which quickly became unmanageable. So we went back to the formula and started to tweak it, creating groups inside groups, hierarchies of groups, and nested groups; introducing Boolean operations on groups; aggregating them into roles, and so on. So what were we just saying about groups being simple? Simple for whom? Simple for the group implementers—yes, definitely. Simple for a user in charge of the initial creation of the group—sure. But add any complexity into the mix and the chaos begins.
So Much for the Digital Revolution: Every Change, Managed Manually
From a computer’s point of view, the assignment of a user to a group is totally opaque—just an explicit list entered by the person in charge of creating the group. This explicit list contains no information about why or how a user is dispatched into or associated with a group. In short, the definition of membership rests with the group owner, which is fine on the face of it. But that excludes any automated assignment of a new member to the group without manual intervention of the group owner. That means every change must be entered by hand—imagine the complexity as people constantly change roles and shift responsibilities. And imagine how easy it would be for an overworked manager to miss removing the name of the person she just fired from just one of the groups that person was part of. Now imagine the security risk if that person still has access to sensitive files.
Without explicitly externalizing those rules, those policies, the administration of the system becomes tied to the group owners/creators. The effort of sub-categorizing with nested groups or introducing more flexible ways to combine groups by using Boolean operators just reveals the root of the problem: When you give users better ways to characterize their groups, you are forcing those users to either make explicit the formation rules of their groups—or continue to make every single change manually, even as those changes become more complex and unmanageable.
And that’s how we (re)discovered the value of attribute-based group definitions.
Machine-Readable Groups: Using Attributes to Simplify Management and Make Policies Explicit
We realized that if we wanted to automate, to simplify the management of all these groups, we needed to describe them at the lowest level as the set of attributes that defined a given group, role, and—yes—context. We discovered that groups and policies can be managed in a more finely-grained manner with increased automation (and greater productivity!) if we characterized them as a set of attributes, combining them with the usual arsenal of Boolean expressions and functions. Basically, we needed an explicit computer representation of this characterization, instead of leaving such definitions in the head of an overtaxed administrator, hoping that auto-magically our human semantic would be interpreted and executable by our machines.
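As a toy illustration of the contrast (with made-up user attributes, not any real directory schema), compare an explicit, manually maintained group with one whose membership is a rule over attributes that a machine can evaluate:

```python
# Minimal sketch: explicit group vs. attribute-based (rule-defined) group.
users = [
    {"name": "Jane",  "department": "Product Marketing", "title": "Manager"},
    {"name": "Raj",   "department": "Sales",             "title": "Account Exec"},
    {"name": "Chloe", "department": "Product Marketing", "title": "Analyst"},
]

# 1) Explicit group: opaque to the computer - the "why" lives in someone's head.
marketing_group_explicit = {"Jane", "Chloe"}

# 2) Attribute-based group: the membership rule itself is the definition,
#    so new hires and role changes are picked up without manual edits.
def marketing_group_rule(user):
    return user["department"] == "Product Marketing"

marketing_group_computed = {u["name"] for u in users if marketing_group_rule(u)}

print(marketing_group_explicit == marketing_group_computed)   # True, today...
# ...but only the rule-based group stays correct when Jane moves to Sales.
```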
So we looked at how we represented those policies, groups, and roles and saw that an attribute-based system was a necessary condition. But unless we go further with this analysis, we run the risk of oversimplification, of coming up with a solution that’s simplistic, instead of elegantly simple—and that would only create another set of problems down the road.
So we could keep all the elements—group, subgroup, etc.—as separated “entities” and link them to a person, as in the first example above. Or we could fuse them together with the definition of a user, as we’ve done in the second example. After all, both implementations can technically yield the same categorization, meaning you can get to the definition of the groups and subgroups you need, with the right members, in both solutions.
But semantically, we’re not talking about exactly the same thing. In one case, we have a notion of groups and subgroups separated from the definition of the person. In the other, we’ve bolted those groups and subgroups on as attributes of that person. So which one is the right definition? That all depends on what you need in your representation—by which I mean it’s contextual—but it’s very important for us to fully grasp the difference. The decomposition into attributes is key for fine-grained authorization, but unless we have a clear understanding about what we are doing, we can take the decomposition too far. In such a case, the world becomes a chaotic set of attributes, where we can’t see the forest for all those trees. While we can peer into a universe made up of the most elementary particles, most real-life problems demand that we recompose that world by gluing all those objects back together again.
Breaking It Down and Building It Back Up, Better Than Before
And that is where we begin to see the need to not only decompose the world into attributes, but also to reorganize that world into objects, relationships, and context. What you get through this reorganization of your information representation is a more complete view of your system, where authorization can be enforced in a more granular way. This is the way we really intend to do it in our policies, as we would define them in natural language—and that’s exactly what we’ll be looking at in my next blog post.
So thanks for reading this introduction to my favorite topic, and be sure to check back for a deep dive into objects, relationships, and context. I’ll even show you how a marketing coordinator and a computer can learn to speak the same language!
The post From Groups to Roles to Context: The Emergence of Attributes in Authorization appeared first on Radiant Logic, Inc.
We covered the key role of attributes in my last blog post, moving from the blunter scope of groups and roles to the more fine-grained approach of attributes. Now we’re going to take this progression a step further, as we narrow in on my favorite topic: digital context. (If you haven’t already, check out my first two posts on context, where I laid out the roadmap and looked at groups, roles, and attributes.) Our first order of business today is to travel back to logic class and think about predicates.* But Michel, you’re thinking, what does all this have to do with digital context? Well, one way to describe a context about something is to express it using sentences related to the question. While we will come back to the definition of context in a following post, for now let’s just say that we need some building blocks to express facts about the world, some form of sentences that can be interpreted by a computer—and logic is one of the tools for that.
Subject-Predicate-Object: First Order Logic 101
In my most recent post, we saw how the notions of groups and roles ended up in the increased use of attributes as a way to categorize or define identities. This should not be surprising. Behind this use of attributes lies a fundamental mechanism—a way to represent a simple fact. And it’s the same mechanism that we use when we reason based on the rules of formal logic, which has been in practice forever, or when we represent a fact on a computer (think SQL). In fact, one of the greatest achievements of the early 20th century was the formalization of logic (needed for the foundations of mathematics) and of computation. This type of logical representation is core to everything we do, as reasoned thinkers and as computer scientists.
But in case you’re a few years removed from logic class, let’s examine this mechanism at work by looking at some very simple diagrams about what we are doing when we associate some attribute with a person or an object, such as assigning a person to a group:
Or assigning a subgroup to a group:
Each of these constructs can be summarized by the following diagram:
In this diagram, a fact can be asserted by the notation: subject-predicate-object. In predicate logic (AKA first order logic), it’s conventionally written as predicate(X,Y), where the variables X and Y could be themselves objects (references to entities) and/or values (arbitrarily “quoted” labels belonging to the initial vocabulary of our logic system). For instance, in our example above, the fact that “Jane is member of the product marketing group” can be written as memberOf(“Jane”,”Product Marketing”) and subGroupOf(“Product Marketing”,“Marketing”).
These kinds of predicates are called “binary” predicates and they are quite common. So if there are binary predicates, the astute reader (that’s you!) might well wonder if there are also unary predicates and, more generally, n-ary predicates. Indeed, the unary predicate exists and generally it’s used to assign a label to an entity—so if we want to say that Jane is an executive, you would write it as executive(“Jane”). As for the n-ary predicate, well here’s where you will find the usual “n-slots” notation of entities/tables as they’re used in the relational/SQL world. So we’d see something like this: age(“Jane”, “33”) or employee(“Jane”, “33”,”product marketing”).
Now, if you look at all those diagrams above, you’ll notice they have a direction, an orientation that tells us which entity plays the role of subject, since the subject and object of a given predicate cannot generally be swapped. This translates into a given order for the different slots of a predicate; for example, in the notation age(“Jane”, “33”), the first slot—“Jane”—is for the person, and the second—“33”—is for her age. Of course, there are always exceptions where the slots are permutable, such as the “brother” binary predicate, where if x is a brother of y—brother(“x”,”y”)—then y is also a brother of x, which could read: brother(“y”,”x”) = brother(“x”,”y”). But in general, order and orientation matter.
The diagrams above form directed graphs and the orientation is essential for preserving the semantics of this representation. After all, saying that x kills y—Kill(“x”,”y”)—is very different from saying that y kills x—Kill(“y”,”x”)!
Essential Semantics: Describing Our World in First Order Sentences
So all this is great, but what does it have to do with context? Stay with me here…we’ve seen that when we reduce everything into attributes, we are reducing the world to first principles. But at the same time, by associating attributes to an entity and recombining them progressively through predicates, we are describing a complete world based on “sentences” of first order logic. If you combine those sentences with the usual Boolean operators (Not, And, Or, and the rest of the derived Boolean Zoo members), you get a world that’s pretty complete—complete enough to act as the foundation of mathematics.
And the good news here is that this world is also pretty close to our own “world of discourse” (albeit a lot like my English: awkward and somewhat robotic). Basically, it’s made of simple sentences in the form of subject-predicate-value (where the predicate is the adjective or qualifier), or subject-attribute-object (where the attribute is the verb). Remember our friend Jane from above? Here are some things we know related to Jane:
Jane is member of marketing group.
Product marketing is subgroup of marketing group.
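To show how directly these “robotic” sentences map onto something a machine can work with, here is a small, illustrative Python sketch (my own encoding, not any product’s) that stores the facts above as subject-predicate-object triples and derives Jane’s transitive group membership from them:

```python
# Rough sketch: facts as subject-predicate-object triples, plus one derived rule:
# memberOf(x, g) and subGroupOf(g, h) => memberOf(x, h)   (assumes no subgroup cycles)
facts = {
    ("Jane", "memberOf", "Product Marketing"),
    ("Product Marketing", "subGroupOf", "Marketing"),
    ("Jane", "age", "33"),                      # a binary predicate: age(Jane, 33)
}

def member_of(person, group):
    """True if person is a direct or transitive member of group."""
    if (person, "memberOf", group) in facts:
        return True
    return any(
        member_of(person, sub)
        for (sub, pred, grp) in facts
        if pred == "subGroupOf" and grp == group
    )

print(member_of("Jane", "Product Marketing"))   # True - asserted directly
print(member_of("Jane", "Marketing"))           # True - derived via the subgroup
print(member_of("Jane", "Sales"))               # False
```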
The beauty of the predicate representation is that a huge part of our digital world is already encoded this way. In fact, all of our so-called “structured information”—databases, transactions, etc—runs according to these principles. But the maze of protocols and security representations we’re all dealing with, from SQL, to LDAP, to APIs, to programming languages, has long masked this reality. We need a way to rise above this modern tower of Babel, a way to translate all that structured, transactional data into something more useful, more contextually-driven. In my next post, I’m excited to show you that we’ve done exactly that: returned to first principles to deliver a “contextual and computational language” that’s as easy to interpret at the human level as it is to execute at the machine level. And this is a huge leap forward. We know we can’t teach our marketing teams to think like machines—and believe me, I’VE TRIED—but imagine a world where a business person and an application can both understand, and act on, the exact same notation. Such a world is possible today…so do not miss my next post!
PS: Some of you have been in on this series from the beginning, but all this blogging began as a response to Ian Glazer’s video on killing IAM in order to save it. For those of you just joining the story, you can catch up with the entire story here: one, two, three, four, five, six, seven.
*See what I did there? That was for all the mathematicians…and for Anil John, who’s just as big a logic geek as I am.
The post Attributes, Predicates, and Sentences: The Building Blocks of Context appeared first on Radiant Logic, Inc.
From Dave Kearns:
Another European Identity (and Cloud) Conference has come and gone, and once again it was an exciting week with packed session rooms and excellent attendance at the evening events. I’m not sure we can continue to call it the “European” Id Conference, though, as I met folks from Australia, New Zealand, Japan, South Africa and all over North and South America. And lots of Europeans, also, I should note. Nor were the attendees content to sit back and soak it all in. At least in the sessions I conducted there was a great deal of give and take between the audience and the speakers and panelists. Mostly good-natured and looking for information, but – occasionally – it got a bit raucous.
The track on authentication and authorization – so near and dear to my heart – drew a standing-room-only crowd who were eager to join in the discussion. As always when AuthN is discussed, passwords drew an inordinate amount of the discussion. I reminded the panelists and the audience that no less a personage than Bill Gates predicted the “death of passwords” back in 2004 – and that even within Microsoft, passwords were still in use.
Too much energy is being spent both on trying to remove username/password from the authentication process and on trying to “strengthen” the passwords that are used. Neither approach is going to be effective. Passwords – the “something you know” – are far easier to use than “something you have” (a security token) and far less scary than “something you are” (biometrics), so the general public is unlikely to ever entertain the idea of switching.
Password strength is, essentially, a myth. Brute force attacks become quicker every day, so hacking the password directly becomes easier every day. Phishing attacks are getting so sophisticated that there’s no need to hack a password (and possibly set off security alarms) when you can induce the user to give it to you willingly.
Two-factor authentication (2FA) had some champions, but most methods have already been shown to be vulnerable either to direct attacks (man-in-the-middle, or MITM) or to the same phishing attacks that subvert “strong” passwords. The object of the phishing attack is, after all, to get the user to log in with their credentials, which are then captured by the attacker. So go to three factors if you want – it’s not much stronger.
I found widespread agreement (with a few diehard holdouts) for a context-collecting risk-based system for Access Control (which I’ve called RiskBAC). Knowing the who, what, when, where, how and why of the authentication ceremony leaves the username/password combo as only one of many factors (the who). In fact, entering a username and correct password isn’t the end of the authentication but merely the trigger to begin the Risk-based Access ceremony or transaction. The other factors are all gathered automatically through system dialogs after the entry of the password has identified the account to which the claimant wishes access.
Of course, once we’re satisfied that the claimant is most likely who he/she claims to be, we then take that information into account along with the other contextual elements to determine the degree of access we’ll authorize to the resource they’re seeking.
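To make the idea of a context-collecting, risk-based ceremony concrete, here is a toy Python sketch. The factor names, weights and thresholds are entirely my own invention for illustration, not Dave Kearns' actual scheme: the password only identifies the account, and the rest of the context drives the decision.

```python
# Illustrative RiskBAC-style sketch: the access decision weighs context,
# not just the password. All weights and thresholds are hypothetical.
def risk_score(context):
    score = 0
    if not context["known_device"]:
        score += 2                      # the "how"
    if context["location"] not in context["usual_locations"]:
        score += 2                      # the "where"
    if context["hour"] < 6 or context["hour"] > 22:
        score += 1                      # the "when"
    if context["resource_sensitivity"] == "high":
        score += 2                      # the "what"
    return score


def access_decision(password_ok, context):
    if not password_ok:                 # the "who" - just the trigger, not the decision
        return "deny"
    score = risk_score(context)
    if score <= 1:
        return "allow"
    if score <= 3:
        return "allow with step-up authentication"
    return "deny"


print(access_decision(True, {
    "known_device": False, "location": "Munich", "usual_locations": {"Phoenix"},
    "hour": 23, "resource_sensitivity": "high",
}))
# -> "deny": right password, wrong context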
While the presentation was called “the Future of Authentication and Authorization,” I did remind the audience that over 2000 years ago the Romans used the same methods for access control. Biometrics (what you are) was represented by facial recognition, tokens (what you have) by scrolls sealed with the leader’s ring (an early use of a security signature), and passwords were, well, passwords – often changed daily to guard against leaks, something more of us should do today.
There was also a contextual element to the access control ceremony when the guard, on observing the claimant, was able to identify him in the context of where he knew the face from – the morning roll call, or the guardhouse. The sealed scroll had context based on what the guard knew about the location (at the camp or thousands of miles away) and condition (alive and kicking, or breathing his last) of the official who sealed the token.
There were lots of other exciting moments – even aha! moments – in the tracks I did on Trust Frameworks and Privacy by Design, as well as in others’ sessions, especially those on Life Management Platforms, a coming technology that many who were hearing about it for the first time agreed will be game-changing when it arrives – and that may not be too far off. If you’d like to catch up, see the just-released Advisory Note: “Life Management Platforms: Control and Privacy for Personal Data” (#70745).
And there was exciting, non-identity-related news as well. We of course announced EIC 2014 for next May, but – remember up at the top of this post I said it was a larger-than-European conference? – we also announced EIC 2014 London, EIC 2014 Toronto and EIC 2014 Singapore. EIC is going worldwide, and the people involved in identity couldn’t be happier. Dates for the new venues haven’t been finalized yet, but I’ll be sure to tell you about them when they are.
Many of WAYF's identity providers are unable to deliver e-mail addresses for their users. The reason is that many institutions no longer run e-mail systems of their own, and so are no longer able to deliver this kind of information. As a result, WAYF now changes the official status of the mail attribute, from MUST to MAY. WAYF thus no longer guarantees its connected services the delivery of a valid e-mail address for every user attempting to log in.
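For a connected service, the practical consequence is that the released attribute set may simply lack a mail value, so the service needs its own fallback. A minimal, hypothetical Python sketch (the attribute names and the fallback flag are assumptions, not WAYF documentation):

```python
# Sketch: handling an attribute set in which 'mail' is now MAY, not MUST.
def on_login(released_attributes):
    user_id = released_attributes["eduPersonPrincipalName"]  # assumed identifier attribute
    mail = released_attributes.get("mail")                   # may be absent
    needs_email_prompt = mail is None                        # service-side fallback flag
    return {"id": user_id, "mail": mail, "ask_user_for_email": needs_email_prompt}

print(on_login({"eduPersonPrincipalName": "jdoe@example.edu"}))
# -> {'id': 'jdoe@example.edu', 'mail': None, 'ask_user_for_email': True}
```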
With the Garancy Access Intelligence Manager, Beta Systems AG has brought to market a new, specialized solution for the analysis of access entitlements. As the product name suggests, it is a solution for "Access Intelligence", a sub-discipline of IAG (Identity and Access Governance). Access Governance solutions usually already offer integrated reporting functions for the collected information about...
After the recent wrestling match in the blogosphere that included vendors and analysts on XACML, I want to provide some best practices for access control/authorization.
The wrestling match is covered in my earlier post.
Let me insert my favorite punch line before I mention the best practices.
Authentication is finite while Authorization is infinite.
Best practices for access control:
1. Know that you will need access control/authorization.
Too many times, architects spend the majority of their system security design time on authentication and federated identity. This leaves limited time for authorization. Compared to authentication, authorization can get very complex over time.
2. Externalize the access control policy processing
You are headed toward disaster if your access control processing is embedded in your application. This is because access control requirements are never complete during the first phase of application development. Authorization rules or requirements change over the application lifecycle as business needs or the environment change. If the access control processing is not decoupled from the application, you will face hardship: lots of band-aid fixes will be applied to the application code to meet the changing, ever-growing authorization requirements.
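Here is a minimal Python sketch of what "externalized" looks like in practice. The policy table, attribute names and resource paths are invented for illustration; the point is only that the application (the enforcement point) asks a separate decision function instead of hard-coding rules.

```python
# Sketch of externalized authorization: application code asks, policy decides.
# --- policy side (could live in a separate service, e.g. a PDP) ---------------
POLICIES = [
    # each policy: (resource prefix, action, required attribute check)
    ("/reports/finance", "read", lambda user: user["department"] == "Finance"),
    ("/reports/",        "read", lambda user: user["role"] == "manager"),
]

def decide(user, action, resource):
    for prefix, allowed_action, check in POLICIES:
        if resource.startswith(prefix) and action == allowed_action and check(user):
            return "Permit"
    return "Deny"

# --- application side (the enforcement point): no embedded authorization logic
def view_report(user, resource):
    if decide(user, "read", resource) != "Permit":
        raise PermissionError(f"{user['name']} may not read {resource}")
    return f"rendering {resource} for {user['name']}"

print(view_report({"name": "Jane", "department": "Finance", "role": "analyst"},
                  "/reports/finance/q2"))
```

When requirements change, only the policy table changes; the application code that calls decide() stays untouched.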
3. Understand the difference between coarse grained and fine grained authorization
Google/Bing will help you understand the difference; Wikipedia will definitely help you here. Application designers tend to create a model of authorization (for simplicity) during initial design. Almost always, this model tends to be a simple coarse-grained authorization model. The challenge is that the real-world authorization needs of your application are not set in stone. They are an ever-changing phenomenon that will pull your model in all directions.
4. Design for coarse grained authorization but keep the design flexible for fine grained authorization
This goes in line with item 2, where the access control policy has to be separated or decoupled from your application. If your initial access control system or library is designed for coarse-grained authorization, then, because of the low coupling, it becomes easier to incorporate fine-grained authorization logic over time.
5. Know the difference between Access Control Lists and Access Control standards
Access Control Lists (ACLs) are pretty popular among system designers. The challenge is that they are proprietary and not usable across applications or domains. You may earn your bonus or accolades using ACLs in your application, but over time they tend to become restrictive due to changing requirements.
There are two prominent access control standards that I list here:
a) IETF OAuth2: a REST-style, Internet-scale, lightweight resource authorization framework.
b) OASIS XACML: a standard for fine-grained authorization. It defines an access control architecture consisting of the PEP (Policy Enforcement Point), PDP (Policy Decision Point), PIP (Policy Information Point) and PAP (Policy Administration Point).
|Fig: Typical XACML Fine Grained Access Control Architecture|
6. Adopt Rule Based Access Control: view Access Control as Rules and Attributes
Access Control should be viewed as rules on various entities (and their attributes) involved in the authorization check.
I am not forcing you to use XACML. But I would certainly encourage you to design your access control system in terms of rules and attributes. Have a look at my article on Access Control Strategies. It is critical that you design your access control system as rules and attributes.
Hey, a Drools-based access control system is certainly not bad, as long as you decouple the access control system from the application. It is a trade-off between proprietary, rigid ACLs and flexible, fine-grained XACML. You can manage your Drools rules via Guvnor.
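As an illustration of "access control as rules and attributes" (independent of XACML or Drools syntax, and with made-up attribute names), each rule below is just data: attribute conditions on the subject, resource and environment, evaluated by a tiny engine.

```python
# Sketch of attribute/rule-based access control; default-deny when no rule applies.
RULES = [
    {"effect": "permit",
     "subject": {"clearance": "secret"},
     "resource": {"classification": "secret"},
     "environment": {"network": "corporate"}},
    {"effect": "permit",
     "subject": {"role": "auditor"},
     "resource": {"type": "audit-log"},
     "environment": {}},
]

def matches(conditions, attributes):
    return all(attributes.get(k) == v for k, v in conditions.items())

def evaluate(subject, resource, environment):
    for rule in RULES:
        if (matches(rule["subject"], subject)
                and matches(rule["resource"], resource)
                and matches(rule["environment"], environment)):
            return rule["effect"]
    return "deny"

print(evaluate({"role": "auditor"}, {"type": "audit-log"}, {"network": "home"}))
# -> "permit": the second rule places no condition on the environment
```

Because the rules are data rather than application code, adding finer-grained conditions later means adding rules, not rewriting the application.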
7. Adopt REST Style Architecture when your situation demands scale and thus REST authorization standards
With the growing demand for web-based services and APIs and the proliferation of mobile devices in the world, it has become essential to incorporate REST-style architecture into your system design.
It is essential for you to use the OAuth2 standard for REST authorization. While OAuth2 takes care of defining the tokens and some rules for authorization (scope of authorization and actor/resource), it may still be essential for system architects to incorporate fine-grained authorization. Certainly take a look at the REST Profile of XACML v3. There is also a JSON binding available.
8. Understand the difference between the Enforcement and Entitlement models
Prominent access control strategies and standards involve the Enforcement model: the access control system tries to enforce access to a resource, which leads to a Yes/No type question. The enforcement model does not scale in a cloud or resource-constrained environment.
In the Entitlement model, the access control system does not perform enforcement or access checks. Rather, it answers questions such as "What permissions does this user have?". The asker then uses the returned answer to perform local enforcement.
|Cloud Enforcement vs Entitlement Model|
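A small, hypothetical Python sketch of the two query styles (the permission data is made up): enforcement answers a yes/no question per request, while entitlement hands back the caller's full permission set once so enforcement can then happen locally.

```python
# Sketch: enforcement vs. entitlement query styles over the same permission data.
PERMISSIONS = {
    "jane": {("read", "invoices"), ("write", "invoices"), ("read", "reports")},
    "raj":  {("read", "reports")},
}

# Enforcement model: one round trip per access check.
def is_permitted(user, action, resource):
    return (action, resource) in PERMISSIONS.get(user, set())

# Entitlement model: return everything the user may do; the caller enforces locally.
def entitlements(user):
    return PERMISSIONS.get(user, set())

print(is_permitted("raj", "write", "invoices"))    # False - central yes/no answer
granted = entitlements("jane")                     # fetched once, cached locally
print(("read", "reports") in granted)              # True - local enforcement
```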
May 20, 2013
It has been a few weeks since I last blogged and it's definitely time I get back into it. Since the beginning of February we (a) launched a major upgrade to Centrify Suite for UNIX/Linux/Mac, (b) entered the Windows privilege management market with DirectAuthorize for Windows; (c) are now fully participating (and doing quite well out of the gates) in the cloud identity management market with Centrify for SaaS; and (d) launched a major partnership with Samsung. And the nice thing is that this product and technology momentum is also being replicated in other areas of our business.
This promises to be a good comments thread.
Can you come up with some examples of sentences that would be incomprehensible (without explanation) to a denizen of 2003 that don't revolve around ephemeral tech or pop culture churn? And can you provide and deconstruct some sentences from 2023 that, if we had sufficient foresight, we ought to be able to understand and interpolate a context for?
My fav so far: "Skype trojan forces Bitcoin mining, security firm warns"
The language of alienation - Charlie's Diary
Some examples, culled from reddit, to get you started: hang2er: "I can't get a 4G signal here, I'll skype you on my droid as soon as I hit a hotspot, I need a coffee anyway." Retinence: "The headline, 'Galaxy Nexus: Android Ice Cream Sandwich guinea pig.'" (But tech is easy ...) ...
[from: Google+ Posts]
May 19, 2013
Chaipuccino is not a thing, no matter what Starbucks may say. If you run a cafe and you have chai tea bags as well as the usual English Breakfast, then congratulations. But putting hot frothed milk in a fancy tea pot, adding a chai tea bag and serving it with a fancy cup is just plain wrong. Please just treat it like Workman's Tea: a mug, a tea bag, boiling water and a splash of milk once it's brewed a bit is fine.
And Starbucks, no thanks for the Chai Tea Latte. Maybe some people like it, but I reckon that's just wrong as well.
[from: Google+ Posts
May 18, 2013
One of the first steps taken to protect a system from authentication errors is the determination of its assurance level requirement. That risk assessment process takes as input potential harm and likelihood of harm. This blog post looks at the applicability of the likelihood factor when assessing assurance level requirements for Internet connected systems.
The classic "E-Authentication Guidance for Federal Agencies (OMB-M04-04) [PDF]" defines risk from authentication error as a function of two factors: (a) potential harm or impact and (b) the likelihood of such harm or impact. The categories of harm and impact and how to apply them, per OMB-04-04, can be found in my earlier blog post on HOW-TO Conduct a Risk Assessment to Determine Acceptable Credentials.
The key point to note is that most risk assessment methodologies allow for “tuning” the risk using a “likelihood of harm/impact” factor, which looks something like this:
Risk of Authentication Error = Potential Impact/Harm * Likelihood of Impact/Harm
But how does one determine the "likelihood of harm" number? The two classic approaches are to explore "base rates" or to consult with experts. But there is a gotcha with experts:
The simplest and most intuitive advice we can offer [...] is that when you’re trying to gather good information and reality-test your ideas, go talk to an expert. Here’s what is less intuitive: Be careful what you ask them. Experts are pretty bad at predictions. But they are great at assessing base rates.
Decisive: How to Make Better Choices in Life and Work
So a prediction by an expert may not be all that valuable. But what about the base rates? My concern there is the constantly evolving threat environment that is the Internet, and how base rates that are based on past data are an unreliable predictor of the future.
So my recommendation in this particular case is rather simple. In this type of evaluation set the "likelihood" factor equal to 1. DO NOT discount the likelihood of harm, and ALWAYS assume there is a likelihood of harm:
Risk of Authentication Error = Potential Impact/Harm * 1
What that means is that, if as part of your assurance assessment you need to factor in the impact or harm from an alien invasion, do not discount the likelihood! Stand firm, fully account for it, and put into place compensating controls to mitigate the consequences.
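In code, the recommendation simply amounts to pinning the likelihood factor to 1 so the assessed risk is never discounted; the impact scale in the sketch below is a made-up example and is not taken from OMB M-04-04.

    # Illustrative impact scale (1 = low, 2 = moderate, 3 = high); not from OMB M-04-04.
    IMPACT = {"inconvenience": 1, "financial_loss": 2, "personal_safety": 3}

    def authentication_error_risk(harm_category, likelihood=1):
        """Risk = potential impact * likelihood; likelihood is deliberately pinned to 1."""
        return IMPACT[harm_category] * likelihood

    # Never discount the likelihood, even for unlikely-sounding harms.
    print(authentication_error_risk("personal_safety"))  # 3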
These are solely my opinions and do not represent the thoughts, intentions, plans or strategies of any third party, including my employer
Something to get lost in. http://electronicexplorations.org/?show=zhou
Fairly short and quirky mix of tunes "that I would want to listen to". Recommended.
"I chose to focus on the less dance floor orientated sounds for this mix and instead tried to compile a selection of tunes that I would want to listen to. It is a mix highlighting some of the music currently coming out of Bristol that I find most exciting as well as tracks that have informed the music we make ...
[from: Google+ Posts
Today is National Buttermilk Biscuit Day. Biscuits fill me with joy, as do community integrations, so here's a post packed with deliciousness from the amazing people in the Stormpath community. (First, here's an awesome biscuit recipe. Happy Biscuit Day!)
- CAS-Addons, now with Richer Stormpath Support
- Python Login Skeleton for Stormpath
CAS-Addons, now with Richer Stormpath Support
The team at Unicon released CAS 3.5 Integration with Stormpath, which allows Stormpath to be used as a primary authentication source for CAS servers. They just added the ability to source Stormpath attributes and expose them as regular CAS Principal attributes. To quote Dmitriy at Unicon, "No need for a complex IPersonDirectoryDao impl, etc. Just a rich StormpathPrincipal encapsulating Account instances."
He also added custom XML namespace support for Stormpath-related beans. The authentication manager element now contains all the Stormpath-related objects. For example, to define a top-level authentication manager containing the Stormpath handler and attribute resolution, one would simply need to define (see the sketch after the list below):
- Top level AuthenticationManager bean definition
- List of handlers with default HttpBased handler and StormpathAuthenticationHandler
- List of principal resolvers with default HTTP principal resolver and StormpathPrincipalResolver (which automatically exposes Stormpath Account data as CAS Principal attributes)
The custom namespace eliminates any boilerplate bean definition constructs.
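Roughly, the equivalent plain-bean configuration in deployerConfigContext.xml might look like the sketch below. The Stormpath handler and resolver class names are my guesses for illustration only; check the cas-addons project for the real package names and for the new custom namespace syntax, which is what removes this boilerplate.

    <!-- Illustrative only: Stormpath class/package names below are guesses,
         not copied from the cas-addons project. -->
    <bean id="authenticationManager"
          class="org.jasig.cas.authentication.AuthenticationManagerImpl">
      <property name="authenticationHandlers">
        <list>
          <bean class="org.jasig.cas.authentication.handler.support.HttpBasedServiceCredentialsAuthenticationHandler"/>
          <bean class="net.unicon.cas.addons.authentication.StormpathAuthenticationHandler"/>
        </list>
      </property>
      <property name="credentialsToPrincipalResolvers">
        <list>
          <bean class="org.jasig.cas.authentication.principal.HttpBasedServiceCredentialsToPrincipalResolver"/>
          <bean class="net.unicon.cas.addons.authentication.StormpathPrincipalResolver"/>
        </list>
      </property>
    </bean>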
Python Login Skeleton for Stormpath
Brian Peterson just released a simple and very intuitive login skeleton for Stormpath that uses the Stormpath Python SDK. This makes it really (I mean, really) easy for Pythonistas to use and understand Stormpath.
He also did a great job of explaining and diagramming the actions of the SDK. Fork it, play with it, send him (and us!) your suggestions and pull requests. As we roll out the Python SDK update, which will include 2.7 support as well as a simplifying refactor, we'll also be updating this handy tool. Nice work!
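For flavour, a login check with the Stormpath Python SDK looks roughly like the sketch below; I am writing the constructor and method names from memory, so treat every call here as an assumption and defer to Brian's skeleton and the SDK documentation for the real API.

    # Sketch only: constructor and method names are assumptions, verify against the SDK.
    from stormpath.client import Client

    client = Client(id="YOUR_API_KEY_ID", secret="YOUR_API_KEY_SECRET")
    application = next(app for app in client.applications
                       if app.name == "My Application")

    def login(username, password):
        """Return the authenticated Account, or None if the credentials are bad."""
        try:
            result = application.authenticate_account(username, password)
            return result.account
        except Exception:
            return None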
May 17, 2013
Shock horror. Festivals are expensive and only middle-aged, middle-class people can afford them.
Which explains how white, middle-aged and middle-class Glastonbury can appear to be (sez the balding old git).
[from: Google+ Posts
Access Risk Management Blog | Courion
Securing an enterprise is no mean feat and is made more difficult by the rapidly expanding use of software in the Cloud. Although security is often cited as a concern with a move to the Cloud, what may not be fully appreciated is how cloud computing amplifies the existing risks of how to best manage millions, if not billions of identity and access relationships.
Check out this article by Kurt Johnson, Courion VP of Strategy and Corporate Development, to learn about the need for real-time access intelligence to manage the risk of improper access to systems and resources that span the enterprise and the Cloud, as well as how organizations can reduce risks before they become bona fide breaches.
Click here to read the full story.
Students from a range of educational institutions can now confirm, through WAYF, their student status with Mecenat, thereby gaining access to discounted items from Mecenat's business partners. Interested educational institutions can get further information from Lasse Urth of Mecenat (phone +45 2851 2171).
People employed at institutions using e-recruitment solutions from peopleXS now have the ability to log into the peopleXS online service using their institutional login, through WAYF. In case of interest, contact peopleXS for further information.
I am not happy with the FIDO Alliance, and their FAQ does not eliminate my concerns.
The major concern being: "Why isn't this going straight to a standards body?"
The FIDO authentication protocol needs to be part of a standardized, interoperable ecosystem to be successful. Building this ecosystem requires the active commitment of everybody from hardware chipset vendors, to the manufacturers of back-end server systems. Coordination across the divergent interests of these players is a complex affair, and one that current technical standards bodies are not well suited to …
The FIDO Alliance will refine the protocol, and monitor the extensions required to meet market needs and to make the protocol robust and mature. Implementation will not be undertaken by the FIDO Alliance. The mature protocol will be presented to the IETF, W3C or similar body after which it will be open to all industry players to …
This is what standardization bodies' working groups are for: work on protocols and formats, work on security considerations, and use the experience of "the community".
So FIDO is developing a protocol and will then present it to one standardization body...
Meanwhile it is a closed thing, and it costs a significant amount of money to join the alliance.
This is neither free nor open.
During IIW there were several sessions on FIDO (1). Each was full of good intentions and marketing speak but no substance. No real information. You have to join the alliance to get that. Well, ...
Somebody at Nok Nok Labs convinced somebody at Paypal to hire them and found FIDO. Why Google joined, despite Google's support for the W3C WebCrypto group, I have no idea.
The W3C WebCrypto group is where this belongs. This might need rechartering of the group, but that is doable. Especially if the proposal is backed by a prototype implementation. Especially if it is backed by Paypal, Lenovo, Google, NXP and others.
I believe that we need better authentication methods beyond username and password. I think that bring-your-own (hardware) identity might work towards that goal. I believe that mobile phones, SIM cards and NFC help to achieve this. I believe that the mobile wallet is the right user interface for choosing your identity.
I believe that doing it in a closed group is not the right way.
May 16, 2013
The 2013 edition of the European Identity & Cloud Conference just finished. As always KuppingerCole Analysts has created a great industry conference and I am glad I was part of it this year. To relive the conference you can search for the tag #EIC13 on Twitter.
KuppingerCole manages each time to get all the Identity thought leaders together which makes the conference so valuable. You know you’ll be participating in some of the best conversations on Identity and Cloud related topics when people like Dave Kearns, Doc Searls, Paul Madsen, Kim Cameron, Craig Burton … are present. It’s a clear sign that KuppingerCole has grown into the international source for Identity related topics if you know that some of these thought leaders are employed by KuppingerCole themselves.
Throughout the conference a few topics kept popping up, making them the ‘hot topics’ of 2013. These topics represent what you should keep in mind when dealing with Identity in the coming years:
XACML and SAML are ‘too complicated’
It seems that after the announced death of XACML everyone felt liberated and dared to talk. Many people find XACML too complicated. Soon SAML joined the club of ‘too complicated’. The source of the complexity was identified as XML, SOAP and satellite standards like WS-Security.
There is a reason why protocols like OAuth, which stay far away from XML and family, have so rapidly gained so many followers. REST and JSON have become ‘sine qua non’ for Internet standards.
There is an ongoing effort for a REST/JSON profile for XACML. It’s not finished, let alone adopted, so we will have to wait and see what it brings.
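For the curious, a request in the draft REST/JSON profile looks roughly like this; since the profile was unfinished at the time, the exact shape of the categories and attributes may well change:

    {
      "Request": {
        "AccessSubject": {
          "Attribute": [
            {"AttributeId": "urn:oasis:names:tc:xacml:1.0:subject:subject-id", "Value": "alice"}
          ]
        },
        "Action": {
          "Attribute": [
            {"AttributeId": "urn:oasis:names:tc:xacml:1.0:action:action-id", "Value": "read"}
          ]
        },
        "Resource": {
          "Attribute": [
            {"AttributeId": "urn:oasis:names:tc:xacml:1.0:resource:resource-id", "Value": "/records/42"}
          ]
        }
      }
    }

The PDP would answer with a small JSON document carrying the decision (Permit, Deny, and so on), which is a lot friendlier to web and mobile developers than the XML encoding.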
That reminds me of a quote from Craig Burton during the conference:
Once a developer is bitten by the bug of simplicity, it’s hard to stop him.
It sheds some light on the (huge) success of OAuth and other Web 2.0 API’s. It also looks like a developer cannot be easily bitten by the bug of complexity. Developers must see serious rewards before they are willing to jump into complexity.
OAuth 2.0 has become the de-facto standard
Everyone declared OAuth 2.0, and its cousin OpenID Connect, to be the de facto Internet standard for federated authentication.
Why? Because it’s simple; even a mediocre developer who hasn’t seen anything but bad PHP is capable of using it. Try to achieve that with SAML. Of course, that doesn’t mean it’s without problems. OAuth uses Bearer tokens that are not well understood by everyone, which leads to some frequently seen security issues in the use of OAuth. On the other hand, given the complexity of SAML, do we really think everyone would use it as it should be used, avoiding security issues? Yes, indeed …
The API Economy
There was a lot of talk about the ‘API Economy’. There are literally thousands and thousands of publicly available APIs (named “Open APIs”) and magnitudes more hidden APIs (named “Dark APIs”) on the web. It has become so big and pervasive that it has become an ecosystem of its own.
New products and cloud services are being created around this phenomenon. It’s not just about exposing a REST/JSON interface to your data. You need a whole infrastructure: throttling services, authentication, authorization, perhaps even an app store.
It’s also clear that developers once more become an important group. There is no use in an Open API if nobody can, or is willing to, use it. Companies that depend on the use of their Open API suddenly see a whole new type of customer: developers. Having a good Developer API Portal is a key success factor.
Context for AuthN and AuthZ
Many keynotes and presentations referred to the need for authn and authz to become ‘contextual’. It was not entirely clear what was meant by that; nobody could give a clear picture, nor any idea what kind of technology or new standards it would require. But everyone agreed this is where we should be going.
Obviously, the more information we can take into account when performing authn or authz, the better the result will be. Authz decisions that take present and past into account and not just whatever is directly related to the request, can produce a much more precise answer. In theory that is …
The problem with this is that computers are notoriously bad at anything that is not rule based. Once you move up the chain and start including the context, then the past (heuristics), and finally principles, computers give up pretty fast.
Of course, nothing keeps you from defining more rules that take contextual factors into account. But I would hardly call that ‘contextual’ authz. That’s just plain RuBAC with more PIPs available. It only becomes interesting if the authz engine is smart in itself and can decide, without hard wiring the logic in rules, which elements of the context are relevant and which aren’t. But as I said, computers are absolutely not good at that. They’ll look at us in despair and beg for rules, rules they can easily execute, millions at a time if needed.
The last day there was a presentation on RiskBAC or Risk Based Access Control. This is situated in the same domain of contextual authz. It’s something that would solve a lot but I would be surprised to see it anytime soon.
Don’t forget, the first thing computers do with anything we throw at them is turning it into numbers. Numbers they can add and compare. So risks will be turned into numbers using rules we gave to computers, and we all know what happens if we, humans, forget to include a rule.
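To illustrate the “RuBAC with more PIPs” point, here is a hypothetical sketch: the rule consults contextual attributes (location, time of day) supplied by PIPs, but which parts of the context matter is still hard-wired by a human, not decided by the engine.

    # Hypothetical PIPs (Policy Information Points) supplying contextual attributes.
    from datetime import datetime

    def location_pip(user_id):
        # e.g. looked up from the device or the network; hard-coded here.
        return "BE"

    def hour_pip():
        # server-side clock, UTC
        return datetime.utcnow().hour

    # A "contextual" rule is still just a rule with more attributes available.
    def may_approve_payment(user, amount):
        return (
            user["role"] == "treasurer"
            and amount <= 10000
            and location_pip(user["id"]) in {"BE", "NL", "LU"}
            and 7 <= hour_pip() <= 19      # only during office hours (UTC)
        )

    print(may_approve_payment({"id": "kim", "role": "treasurer"}, 2500))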
Graph Stores for identities
People got all excited by Graph Stores for identity management. Spurred by the interest in NoSQL and Windows Azure Active Directory Graph, people saw it as a much better way to store identities.
I can only applaud the refocus on relations when dealing with identity. It’s what I have been saying for almost 10 years now: identities are the manifestations of a relationship between two parties. I had some interesting conversations with people at the conference about this and it gave me some new ideas. I plan to pour some of those into a couple of blog articles. Keep an eye on this site.
The graph stores themselves are a rather new topic for me so I can’t give more details or opinions. I suggest you hop over to that Windows Azure URL and give it a read. Don’t forget that ForgeRock already had a REST/JSON API on top of their directory and IDM components.
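As a toy illustration of that view, a graph store models each identity as a relationship edge between two parties, with attributes on the edge; the sketch below uses plain Python dictionaries and made-up party names, where a real deployment would use a graph database or a directory exposing a REST/JSON API.

    # Each identity is an edge: (party A) -[relationship]-> (party B), with attributes.
    edges = [
        ("alice", "works_for", "acme", {"employee_id": "E123", "since": "2010"}),
        ("alice", "customer_of", "webshop.example", {"nickname": "alice77"}),
        ("acme", "tenant_of", "cloud-provider.example", {"plan": "enterprise"}),
    ]

    def identities_of(party):
        """All relationships in which the party participates."""
        return [e for e in edges if e[0] == party or e[2] == party]

    for edge in identities_of("alice"):
        print(edge)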
Life Management Platforms
Finally there was an entire separate track on Life Management Platforms. It took me a while to understand what it was all about. Once I found out it was related to the VRM project of Doc Searls, it became clearer.
Since this recap is almost getting longer than the actual conference, I’ll hand the stage to Martin Kuppinger and let him explain Life Management Platforms.
That was the 2013 edition of the European Identity & Cloud Conference for me. It was a great time, and even though I haven’t even gotten home yet, I already intend to be there again next year.
This morning, I read a recent Oracle white paper entitled “Transforming Customer Experience: The Convergence of Social, Mobile and Business Process Management.” It gave an interesting perspective on the blending of emerging paradigms – mobile and social – with the older discipline of Business Process Management.
To stay ahead in today’s rapidly changing business environment, organizations need agile business processes that allow them to adapt quickly to evolving markets, customer needs, policies, regulations, and business models. … Social and mobile business models have already contributed important new frameworks for collaboration and information sharing in the enterprise. While these technologies are still in a nascent state, BPM and service oriented architecture (SOA) solutions are well established, providing a history of clear and complementary benefits.
The key is effectively leveraging the strengths of existing, proven architectures while taking advantage of new opportunities:
The term “Social BPM” is sometimes used to describe the use of social tools and techniques in business process improvement efforts. Social BPM helps eliminate barriers between decision makers and the people affected by their decisions. These tools facilitate communication that companies can leverage to improve business processes. Social BPM enables collaboration in the context of BPM and adds the richness of modern social communication tools.
… Social BPM increases business value by extracting information from enterprise systems and using it within social networks. Meanwhile, social technologies permit employees to utilize feedback from social networks to improve business processes.
I found one use case presented in the paper to be particularly instructive. As illustrated in the following diagram,
A claims management system assigns a task to an individual claims worker with the expectation that the user will complete the task to advance the process. Of course, to accomplish this type of knowledge-based task, the individual must often engage other people within the business.
However, Social BPM enables the use of social networking tools to extend collaboration beyond the traditional enterprise boundaries, as shown in the following diagram:
Not only can internal knowledge workers use social networking tools to find each other and share information, but also customers can interact with the process at specific steps, using mobile devices, to supply their own information into a business process. For example, a customer involved in an auto accident might upload photos taken with a cell phone into the process via a claims management app provided by the insurance company.
In order to make this all work, participants will need to use both enterprise and social identity credentials. Because they are using mobile devices, the IAM system must accommodate mobile, social and cloud infrastructures in order to effectively use information. This is very much in line with the principles set forth in the Gartner Nexus I addressed yesterday.
Realists have no idea how they ended up living on this once hospitable planet with all these fools
Chinese Demand, Peak Oil And Realism - Decline of the Empire »
This is the third and final day of my spring fundraiser. If you value this website, consider making a donation via the Donate (Paypal) button on this page, or by sending a check or money order to the PO Box I gave you in Tuesday's post. Thanks - Dave [Tony Judt's book Ill Fares the Land] has a touch of prophecy in the authentic sense of that term. Prophecy is not about foretelling the future; it is about warning those in the present that unless th...
[from: Google+ Posts
16 May (tonight), MS Stubnitz, Canary Wharf, London for some Real time, Algorithmically Generated Techno wholly or predominantly characterised by the emission of a succession of repetitive conditionals.
Time to dust off the Music Tech dissertation and the rhythm generator using Markov Chains in the time domain.
London (MS Stubnitz) Algorave on 16th May 2013 »
When: 7pm-11:30pm, Thursday 16 May 2013 Where: MS Stubnitz, Montgomery Street, Canary Wharf tube, London E14 9SB Tax: £9 advance tickets (or plenty on the door for £10) We're back on-board the MS S...
[from: Google+ Posts
May 15, 2013
I’m pleased to report that OAuth 2.0 has won the 2013 European Identity Award for Best Innovation/New Standard. I was honored to accept the award from Kuppinger Cole at the 2013 European Identity and Cloud Conference on behalf of all who contributed to creating the OAuth 2.0 standards [RFC 6749, RFC 6750] and who are building solutions with them.
Today I read a year-old document published by Gartner, entitled, “The Nexus of Forces: Social, Mobile, Cloud and Information.” It explains the interaction among these market forces better than any single document I have read:
Research over the past several years has identified the independent evolution of four powerful forces: social, mobile, cloud and information. As a result of consumerization and the ubiquity of connected smart devices, people’s behavior has caused a convergence of these forces.
In the Nexus of Forces, information is the context for delivering enhanced social and mobile experiences. Mobile devices are a platform for effective social networking and new ways of work. Social links people to their work and each other in new and unexpected ways. Cloud enables delivery of information and functionality to users and systems. The forces of the Nexus are intertwined to create a user-driven ecosystem of modern computing. (my emphasis added)
Excerpts from Gartner’s treatment of each of these areas include:
Social is one of the most compelling examples of how consumerization drives enterprise IT practices. It’s hard to think of an activity that is more personal than sharing comments, links and recommendations with friends. Nonetheless, enterprises were quick to see the potential benefits. Comments and recommendations don’t have to be among friends about last night’s game or which shoes to buy; they can also be among colleagues about progress of a project or which supplier provides good value. Consumer vendors were even quicker to see the influence — for good or ill — of friends sharing recommendations on what to buy.
Mobile computing is forcing the biggest change to the way people live since the automobile. And like the automotive revolution, there are many secondary impacts. It changes where people can work. It changes how they spend their day. Mass adoption forces new infrastructure. It spawns new businesses. And it threatens the status quo.
Cloud computing represents the glue for all the forces of the Nexus. It is the model for delivery of whatever computing resources are needed and for activities that grow out of such delivery. Without cloud computing, social interactions would have no place to happen at scale, mobile access would fail to be able to connect to a wide variety of data and functions, and information would be still stuck inside internal systems.
Developing a discipline of innovation through information enables organizations to respond to environmental, customer, employee or product changes as they occur. It will enable companies to leap ahead of their competition in operational or business performance.
Gartner’s conclusion offers this challenge:
The combination of pervasive mobility, near-ubiquitous connectivity, industrial compute services, and information access decreases the gap between idea and action. To take advantage of the Nexus of Forces and respond effectively, organizations must face the challenges of modernizing their systems, skills and mind-sets. Organizations that ignore the Nexus of Forces will be displaced by those that can move into the opportunity space more quickly — and the pace is accelerating.
So, what does this mean for Identity and Access Management? Just a few thoughts:
- While “Social Identity” and “Enterprise Identity” are often now considered separately, I expect that there will be a convergence, or at least a close interoperation of, the two areas. The boundaries between work and personal life are being eroded, with work becoming more of an activity and less of a place. The challenge of enabling and protecting the convergence of social and enterprise identities has huge security and privacy implications.
- We cannot just focus on solving the IAM challenges of premises-based systems. IAM strategies must accommodate cloud-based and premises-based systems as an integrated whole. Addressing one without the other ignores the reality of the modern information landscape.
- Mobile devices, not desktop systems, comprise the new majority of user information tools. IAM systems must address the fact that a person may have multiple devices and provide uniform means for addressing things like authentication, authorization, entitlement provisioning, etc. for use across a wide variety of devices.
- We must improve our abilities to leverage the use of the huge amounts of information generated by mobile/social/cloud platforms, while protecting the privacy of users and the intellectual property rights of enterprises.
- Emerging new computing paradigms designed to accommodate these converging forces, such as personal clouds, will require built-in, scalable, secure IAM infrastructure.
- The Gartner Nexus doesn’t explicitly address the emergence of the Internet of Things, but IoT fits well within this overall structure. The scope of IAM must expand to not only address the rapid growth of mobile computing devices, but the bigger virtual explosion of connected devices.
We live in an interesting time. The pace of technological and social change is accelerating. Wrestling with and resolving IAM challenges across this rapidly changing landscape is critical to efforts to not only cope with but leverage new opportunities caused by these transformative forces.
Today, services like authorization and authentication are delivered via APIs: JSON / REST HTTP “endpoints.” Some of the most popular authentication APIs on the Internet use different profiles of OAuth2. Because consolidation increases efficiency, Google, Microsoft, Yahoo, and others came together to define one standard profile for OAuth 2.0 authentication: OpenID Connect.
OpenID Connect documents a single profile of OAuth2 that can be used by any Internet domain. One standard for domain authentication will simplify security for application developers (web and mobile), make end users more secure, and enable easier integration of mobile devices and cloud agents.
See Toshiba Cloud TV in Action.
Specifically, OpenID Connect defines several endpoints that enable domains to offer: (1) user authentication; (2) client registration; (3) client authentication; (4) user claims; (5) client claims; and (6) discovery. Industry analysts are predicting that OpenID Connect is on a trajectory for significant adoption. The standard should be finalized by the end of 2013. Nat Sakimura (NTT), Vice-Chairman of the OpenID Foundation, has said this about OpenID Connect: “we are done apart from formalities.”
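As an example of the discovery piece, an OpenID Provider publishes its endpoints in a JSON document at /.well-known/openid-configuration. The values below are illustrative only, and since the specification was not final at the time of writing, the exact field set could still change:

    {
      "issuer": "https://op.example.com",
      "authorization_endpoint": "https://op.example.com/authorize",
      "token_endpoint": "https://op.example.com/token",
      "userinfo_endpoint": "https://op.example.com/userinfo",
      "registration_endpoint": "https://op.example.com/register",
      "jwks_uri": "https://op.example.com/jwks.json",
      "scopes_supported": ["openid", "profile", "email"],
      "response_types_supported": ["code", "id_token", "token id_token"],
      "subject_types_supported": ["public", "pairwise"],
      "id_token_signing_alg_values_supported": ["RS256"]
    }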
For reasons like these, Toshiba decided in 2012 to align with OpenID Connect. As Gluu’s open source “OX” platform performed well in the OpenID Connect OpenID Provider (“OP”) interop, Toshiba decided it was preferable to use OX rather than write their own implementation.
Learn more about OpenID Connect via slides from Microsoft’s Michael B. Jones.
The partnership with Toshiba has driven the implementation of a number of features to the OX platform. For example, they wanted to build a highly available “cluster” of authentication servers delivered across multiple geographic regions to ensure business continuity. This would enable Toshiba engineers to take a server out for maintenance, and just add it back later.
Toshiba has also been helpful with testing and benchmarking. OX has been in production there since last year, so we have also been able to observe the behavior of the platform over time, while handling significant load.
Gluu has also built features to enable Toshiba to use the central publication of multi-party federation metadata to enable globally delivered websites to trust identity providers in different regions (Japan, US, and Europe) without persisting any personally identifiable data outside of the region. Although JSON multiparty federation metadata is not currently a feature of OpenID Connect, Gluu has documented its implementation at the OpenID Foundation in the Emerging Work Section, and hopes it will be included in a subsequent release: http://wiki.openid.net/w/page/59727624/Multi-Party%20Federations
Toshiba is keen to promote the OX open source platform within the SmartTV Alliance, which is why they authorized the May 1, 2013 press release. Adoption of the OX open source platform will help members of the SmartTV Alliance collaborate on the development of an Internet scale, interoperable security infrastructure, a goal everyone wants to achieve.
Gluu provides services to companies that want to use the OX platform: Design, Build, Operate, and Transfer (DBOT). We were able to help Toshiba engineers jumpstart their development effort and to provide some tactical feature enhancements in the open source project to support their rollout.
European Identity Award 2013 for „Best Innovation/New Standard in Information Security”: A new standard that rapidly gained momentum and plays a central role for future concepts of Identity Federation and Cloud Security.
Special Award 2013 for „Bridging the organizational gap between Business and IT”: A project that was far above average when it comes to Business/IT Alignment, by successfully setting up a framework of guidelines and policies plus the required organizational entities and rolling this out into a global organization.
European Identity Award 2013 in category „Best Access Governance and Intelligence Project”: Holistic IAM/IAG approach following new architectural concepts and enabling Dynamic Authorization Management based on business rules.
Special Award 2013 for „Rapid Re-Design and Re-Implementation of the Entire IAM”: Moving from a traditional, Active Directory-centric environment to full HR integration on a global scale and full support for automated provisioning, based on a clearly defined roadmap for further improvement.
European Identity Award 2013 in category „Best Access Governance and Intelligence Project”: Implementing cross-divisional SoD rules on a global scale at business level, with full integration into the existing Access Governance solution.
This evening, the analyst group KuppingerCole presented the European Identity & Cloud Award 2013 in various categories at the seventh European Identity & Cloud Conference (EIC). This award honours outstanding projects and initiatives in the areas of Identity & Access Management (IAM), GRC (Governance, Risk Management and Compliance) and Cloud Security. Nominated were numerous projects that, over the course of the last 12 months, user companies and vendors...
Google All Access. "Radio Without Rules", Music streaming $9.99 pm, $7.99 pm early adopters. 30 days free. An extension to Google Music allowing clever playlists and instant access to any track in Google's library. Along with some more smarts for exploring based on your listening and library habits. USA Today. Other countries rolling out "soon".
No thanks. I've already got 30k tracks in my personal collection.
[from: Google+ Posts
May 14, 2013
Social media is fast becoming the identity mechanism of choice to log into popular sites and company information. Looking to find the right music on Spotify? Want to connect with the world’s professionals on LinkedIn? You can now simply log in via your Facebook account. The UK Government may even soon allow citizens to use their social media identity to access public services safely and securely...
On May 7, Andras Cser of Forrester Research, Inc. posted a thought-provoking blog entry entitled “XACML is Dead” which postulated that there wasn’t any future for XACML.
At CA Technologies we have long supported a broad range of industry standards such as LDAP, X.509, WS-Federation, SAML, WS-Security, REST, SPML as well as more recent standards like OpenID, OpenID Connect and OAuth, thereby...
Summer is coming, which means hurricane and tornado season is here. Do you have a contingency plan for your critical IT infrastructure? If so, is…