July 25, 2014

Julian BondApocaloptimism. Try to imagine a desirable future in 2114 (100 years) that you would want to live in... [Technorati links]

July 25, 2014 06:13 AM
Apocaloptimism. Try to imagine a desirable future in 2114 (100 years) that you would want to live in. Because we need some more hopeful narratives to counter all the dystopianism.

https://medium.com/message/a-desirable-future-haiku-ff01d63c93c6

"Population 4 billion; 85% urban. Climate change adapted" Getting from here to there would be, ahem, interesting.

Quite a lot of Californians in there hoping for a future that's all "graphite and glitter" in some http://en.wikipedia.org/wiki/Gernsback_Continuum

Of course the 100 year future will be messier than that and more like today than like Star Trek. But what about the 1000 and 10,000 year futures? ;) Try and imagine an Earth economy in 10,000 years that can support 5B people. 100% recycling but no Helium. And of course the 10 year future in 2024 that you would want to live in.
 A Desirable-Future Haiku »
The coming hundred years, in one hundred words

[from: Google+ Posts]

Vittorio Bertocci - MicrosoftProtecting an ASP.NET WebForms App with OpenId Connect and Azure AD [Technorati links]

July 25, 2014 06:12 AM

All of our official .NET samples that show some web UX are based on MVC. This caused somebody to speculate that the new OWIN components for OpenId Connect and WS-Federation require MVC to function. Nothing could be further from the truth! You can totally use them to secure your WebForms apps. Here's a super quick tutorial on how to do it. It is super easy: 98.8% of the tutorial is exactly the same as the corresponding MVC-based tutorial, but for the sake of de-normalization I am going to go through those steps by value instead of by reference, on account of the possibility that some of you might not have had any previous exposure to this, given the samples' MVC bias.

Create an empty project

Fire up the good ol’ VS2013, and head to new project->ASP.NET Web application. On the project template dialog, pick Web Forms. Hit OK.

image

Visual Studio will create your project according to the template you picked. Before moving any further, let's enable SSL: select the project in Solution Explorer, open the Properties window (F4), set SSL Enabled to True, and take note of the SSL URL that appears.

Provision the app in Azure AD

Let's leave VS for a few moments and pay a visit to the Azure portal, where we will tell our Azure AD tenant about our newly minted application.

Navigate to https://manage.windowsazure.com/, sign in as your tenant admin, scroll to the Active Directory tab, choose the tenant you want to use, select the Applications tab, and click the Add button on the appbar at the bottom of the screen.

Choose “Add an application my organization is developing”.

Give the app any name you like. Keep the default "web application and/or web api". Click the Next arrow.

In the Sign-On URL enter the HTTPS address you got when you enabled SSL on the project (mine is https://localhost:44307/). In the App ID URI enter any valid URI that will later remind you of what this app is. For my test app I chose http://wifeistravellinghenceIblogoutofboredom. Click the Done button.

Click on the Configure tab and leave the browser open there. We’re going to need some of the values here in just a moment.

Add references to the Cookie/OpenId Connect/SystemWeb NuGets

Next, let’s go back to Visual Studio. Go to Tools->Library Package Manager->Package Manager Console. In the console, enter the following three magic commands:

Install-Package Microsoft.Owin.Security.OpenIdConnect -Pre

Install-Package Microsoft.Owin.Security.Cookies -Pre

Install-Package Microsoft.Owin.Host.SystemWeb -Pre

Those will bring down the Katana components you need.

Add the initialization logic

We are in good shape! Given that we started from the Individual Auth template, the OWIN pipeline is already present. We just need to change it to use OpenId Connect. If for some reason (e.g. ADFS) you want to use WS-Federation, the mechanism is *exactly* the same, you just use the appropriate middleware.

using Microsoft.Owin.Security;
using Microsoft.Owin.Security.Cookies;
using Microsoft.Owin.Security.OpenIdConnect;
using Owin;

// Inside ConfigureAuth(IAppBuilder app) in App_Start/Startup.Auth.cs
// (created by the Individual Auth template), register the middleware:

// Sign-in results are persisted in a session cookie.
app.SetDefaultSignInAsAuthenticationType(CookieAuthenticationDefaults.AuthenticationType);

app.UseCookieAuthentication(new CookieAuthenticationOptions());

// OpenId Connect middleware pointing at the Azure AD tenant.
app.UseOpenIdConnectAuthentication(
    new OpenIdConnectAuthenticationOptions
    {
        ClientId = "d04fb01f-0715-4ed7-a656-0793b545e1f1",
        Authority = "https://login.windows.net/6c3d51dd-f0e5-4959-b4ea-a80c4e36fe5e"
    });

That will change the pipeline to use OpenId Connect. The values you see there are associated with my test app: you will have to replace them with the coordinates of your own app, in your own tenant. Namely:

  1. ClientId must contain the client ID that the Azure portal generated for your app (you'll find it on the Configure tab you left open).
  2. Authority must be "https://login.windows.net/" followed by your own Azure AD tenant, identified either by its domain or by its tenant ID GUID.

Done.

Give it a spin!

Hit F5. Your page will come up. Click on the Log In link on the top right corner, which comes directly from the template bits.

image

On the right hand side, you’ll notice the OpenIdConnect button. Hit it.

image

You’ll see the familiar AAD authentication UX. Enter your test user.

image

And voilà! The user is signed in. Q.E.D.

Extra Credit

The sequence above does not leave the project in the cleanest possible state – my goal was to show you in the smallest number of steps that the OpenId Connect (and WS-Federation) middleware does work with WebForms.

In a more realistic setup, you would likely start from a template with the "no authentication" option. That would leave you with the responsibility of adding Startup.cs, but that's really boilerplate code (a minimal sketch appears after the snippet below). Also, you would likely want to add some automatic authentication trigger. That is easy enough to achieve. For example, consider the following implementation of Default.aspx.cs:

using Microsoft.Owin.Security;
using Microsoft.Owin.Security.OpenIdConnect;
using System;
using System.Web;
using System.Web.UI;

namespace OWINandWebForms
{
    public partial class _Default : Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            if (!Request.IsAuthenticated)
            {
                HttpContext.Current.GetOwinContext().Authentication.Challenge(
                    new AuthenticationProperties { RedirectUri = "/" }, 
                    OpenIdConnectAuthenticationDefaults.AuthenticationType);
            }
        }
    }
}

That simply triggers a sign-in if the caller is not authenticated. Not hard at all!
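For reference, a minimal sketch of that boilerplate Startup class (assuming the usual Katana OwinStartup pattern, with ConfigureAuth defined in a second partial class file containing the middleware registrations shown earlier) could look like this:

using Microsoft.Owin;
using Owin;

[assembly: OwinStartup(typeof(OWINandWebForms.Startup))]

namespace OWINandWebForms
{
    public partial class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            // Wires up the cookie and OpenId Connect middleware shown earlier;
            // ConfigureAuth lives in the other half of this partial class.
            ConfigureAuth(app);
        }
    }
}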

 

 

Well, there you have it. OpenId Connect, Azure AD and WebForms.
We chose OWIN as the platform for our new wave of identity libraries because of its flexibility – don't let the fact that we standardized on MVC for our samples stop you from enjoying the latest and greatest!

July 24, 2014

Kantara InitiativeIRM Work Group is Live! [Technorati links]

July 24, 2014 09:44 PM

Dear Community,

With great pleasure, I announce the formation of the Identity Relationship Management Work Group (IRM WG)! If you're interested in evolving IRM or are already practicing it – this is your innovation space.

The wiki space (work in progress) is here.
The sign up form is here. *

* Note and shameless plug: You don’t need to be a Kantara Member to join the group but we hope you’ll consider it! As a non-profit, every Membership counts.

Not sure what IRM is?  No problem.  The original post is here. IRM (roughly) provides a business-focused language to describe the current and future state of Identity Services, Protocols and Standards.  IRM is about the evolution of Identity and Access Management from the IT and GRC team to the Business Development team.  IRM captures the sense that IdM / IAM is about more than solving a help desk issue – it's about growing connections for businesses, communities, and governments.

IRM has already resonated with companies on an international scale, including the founders: Experian, ForgeRock, and Salesforce.com.  Once the pillars were published, innovators like Radiant Logic and Avoco Identity were quick to tell their "IRM Story".  CA has also noted that the language strongly resonates with their mission. IRM is about the evolution and value of Identity and Relationships and how businesses, governments, consumers, and citizens can use that value to power their market. IRM considers how online context is used today and starts to draw the path toward provisioning user access and giving users privacy-respecting control over resource sharing (see User-Managed Access – UMA).

In June, I gave a presentation at the IRM Summit to describe the IRM revolution.  You can check the video to get a quick idea of the topic (yes, there might be rainbow unicorns somewhere in the deck).

At the same event, Ian Glazer started the ball rolling with his intro to "the Laws of Relationship Management." Ian has started the community discussion around IRM and what it looks like, both technically and policy-wise. Check out Ian's presentation.

If you’re interested in discussing the business language of Identity, while evolving and innovating use cases, technology and policy, IRM is the group for you.  If you’d like to share your IRM Story, IRM is the group for you.

We look forward to evolving IRM and revolutionizing Identity with you!

Joni Brennan
Executive Director, Kantara Initiative

Kuppinger ColeExecutive View: BalaBit Shell Control Box - 71123 [Technorati links]

July 24, 2014 05:16 PM
In KuppingerCole

BalaBit IT Security was founded in 2000 in Hungary, and their first product was an application layer firewall suite called Zorp. Since that time, BalaBit has grown into an international holding headquartered in Luxembourg with sales offices in several European countries, the United States and Russia, and a large partner network. The company has won widespread recognition in the Open Source community by making their core products available as free...
more

Ian GlazerDo we have a round wheel yet? Part 2 of my musings on identity standards [Technorati links]

July 24, 2014 04:26 PM

Yesterday I talked about the state of identity standards with regards to authentication and authorization. Today I’ll cover attributes, user provisioning, and where we ought to go as an industry.

Attributes

The wheel of attributes is roundish. There are two parts to the attribute story: access and representation. We can access attributes… sorta. There's no clear winner that is optimized for the modern web. We've got graph APIs, ADAP, and UserInfo Endpoints – not to mention proprietary APIs as well. Notice I added the constraint of "optimized for the modern web." If we remove that constraint, then we could say that access to attributes is a fully solved problem: LDAP. But we are going to need a protocol that enables workers in the modern web to access attributes… and LDAP ain't it.

As for a standardized representation, we have one. Name-value pairs. In fact, name-value pairs might be the new comma. And although NVPs are ubiquitous, we don't have a standard schema. What is the inetOrgPerson of a new generation? There is no inetOrgPerson for millennial developers to use. But does that even matter? We could take SCIM's schema and decree it to be the standard. But we all know that each of us would extend the hell out of it. Yes, we started with a standard schema, but every service provider's schema is nearly unique.

 User Provisioning

User provisioning is nearly round. Let's face it: the wheel that SPML v2 built was not round. The example that the standard provided wasn't even valid XML – not an auspicious start. In fact, SPML was a step away from roundness when we think about DSML v2. DSML v2 was a round wheel. It wouldn't be very useful today, but it would roll.

So what about SCIM? I'm bullish on it. Some really smart people worked on it, including my boss. We (salesforce.com) are supporting it. Others, such as Cisco, Oracle, SailPoint, and Technology Nexus, are supporting it too. We hope you support it as well. In fact, hopefully, at the end of this week it might just get a final version of the 2.0 draft at the IETF meeting in Toronto. SCIM definitely needs more miles on the road, but I believe that the use cases that have been used to form SCIM are fairly representative of a majority of the use cases we have. It can't do everything, but you'd better believe it can do something.

And this narrow focus is important as we think about the work we must do. As we as an industry shift from just dealing with employee identities to those of customers, citizens, and things, there is a shift from heavy, rich user provisioning to lighter-weight registration and profile management. SCIM is just as applicable in an employee identity scenario as it is in a customer identity scenario, and is thus well positioned to make the transition.

More than just wheels

How do you discover the identity services of a service provider? I don't mean in a specific OIDC way, but in a more general way. How do you know if they use SAML, SCIM, a proprietary attribute API, FIDO U2F, etc.?

Is there a way to kickstart point-to-point identity relationships without paying the cost of point-to-point drudgery? Could I point my identity system at yours, form a relationship between the organizations, and start to use our joint identity services to meaningfully interact?

Let me ask this a different way – do we have hubs and axles for our roundish wheels? Can we build something that removes the heavy lifting when offering and/or consuming identity services? I believe this is the uncharted standards territory into which we must blaze a trail.

Measuring our progress

As we continue to refine our standards, we need a way of evaluating the roundness of those wheels, so to speak. We need some set of design considerations to help us decide whether a standard will get us from here to there. A few weeks ago I debuted the laws of relationships.  They are a set of considerations that we as identity professionals must be mindful of as we begin to navigate the waters of modern IAM – of identity relationship management. They can help evaluate the roundness of our standards… but only if you lend a hand. Kantara is creating an Identity Relationship Management working group to which I am giving these Laws of Relationships. I hope you will join me, Joni, Allan, and others in this new working group to help make identity ready for the modern era.

The challenge ahead

That modern era is one in which more people and more things are more closely related. It is an era that holds the promise of “identity as business enabler.” And in this modern era identity will not only deliver the right access to the right people at the right time but the right experience to the right people at the right time. Not just people but things too.

To be fair, this modern era will require us to haul a heavy load. To do that we need round wheels. We need workable identity standards. We have made great progress but we are not there yet.

I’ll ask 3 things of this audience and of our industry. First, adopt standards. If you aren’t using identity standards, you are inventing your own wheel. That is a strategy only optimized for the short-run. If the current ones don’t work for you, bring those use cases to standards bodies. If you don’t know where to go, ping people like Kelly, Eve, Justin, Nishant, or Patrick, and they’ll help you find the right place to go.

Second, help others adopt standards. Build SDKs to help people use OpenID and SAML. Support open source implementations of SCIM and OAuth. Start at home – with your organization's developers – and move out from there.

Third, demand standards. From your identity technology providers. Demand standards. From your business service providers. Demand standards. From your own development teams. Demand standards. If for no other reason than to kill off the need for password vaulting. Demand standards.

Lastly, keep in mind that a round wheel is not an end in and of itself. A great spec is potentially satisfying to the hard-core identity dorks in the room, me included, but that isn't the real goal. We reinvent the wheel, we revisit and rebuild our standards, to get round ones – beautifully functioning ones that help carry the loads we must shoulder and get us to where we need to go in this era of modern identity.

Julian BondSex, drugs and rock 'n' roll are universal characteristics of the human condition. [Technorati links]

July 24, 2014 02:06 PM
Sex, drugs and rock 'n' roll are universal characteristics of the human condition.

http://guerillascience.org/book/

Were it not for our supposedly ‘base’ impulses, we never would have achieved many ground-breaking scientific discoveries. Hedonism has been integral to intellectual progress.
 Sex Drugs and Rock ‘n’ Roll – Guerilla Science »
Guerilla Science create events and installations for festivals, museums, galleries, and other cultural clients. We are committed to connecting people with science in new ways, and producing live experiences that entertain, inspire, challenge and amaze.

[from: Google+ Posts]

Nat SakimuraTrying to sort out the siloed spaghetti of our information infrastructure… [Technorati links]

July 24, 2014 11:22 AM

Twitter jirok: "I'm trying to sort out the siloed spaghetti of our information infrastructure, but ..." (embedded tweet)

Professor Kokuryo, whom I greatly respect, posed this question to me, so although it took a little while, I have written up my answer in this blog post.

Let me start with the conclusion.

(1) Identifiers → Identity Register+RA

(2) Identity verification infrastructure → IdP+CSP

(3) Attributes → IIA/IIP

(4) Services → RP/SP

If we map the question onto these terms, then it seems best to split things along these lines and manage them separately.

The explanation follows.

The foundation of IdM is identifying the user in question. Identification means uniquely distinguishing an entity from the other entities in a population. Since we cannot observe an entity directly, this amounts to collecting attribute values tied to that entity until the set of values becomes unique. At that point, that set of attribute values is the "identifier". However, the values keep changing and may lose their distinguishing power at some later time, so unless you assign a unique and immutable string at the moment of identification, you will run into management problems. ISO/IEC 24760-1 calls an identifier assigned for this purpose a reference identifier, and the function that generates such identifiers a reference identifier generator.
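To make the reference identifier idea concrete, here is a minimal illustrative sketch (not from the original post; the class and method names are hypothetical) of a register that hands out an immutable reference identifier for a set of identifying attributes:

using System;
using System.Collections.Generic;

// Minimal sketch of a reference identifier generator plus identity register.
// Names are illustrative only; they do not come from ISO/IEC 24760-1.
public class IdentityRegister
{
    private readonly Dictionary<string, IReadOnlyDictionary<string, string>> _entries =
        new Dictionary<string, IReadOnlyDictionary<string, string>>();

    // Called once the attribute set uniquely identifies the subject.
    // The returned reference identifier is unique and never changes,
    // even if the underlying attribute values change later.
    public string Register(IReadOnlyDictionary<string, string> identifyingAttributes)
    {
        var referenceIdentifier = Guid.NewGuid().ToString("N");
        _entries[referenceIdentifier] = identifyingAttributes;
        return referenceIdentifier;
    }
}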

Because identification relies on attribute values, the reliability of the identification depends on the reliability of those values. Measuring the reliability of this set of attribute values is called identity proofing. Since identity is defined as a set of attributes related to a subject, identity proofing means "verifying the set of attributes related to a subject". What is called "honnin kakunin" (identity verification) in Japan is basically a special case of this, where the attributes are the four basic items of personal information. ISO/IEC 29115 proposes classifying the assurance of this set of attribute values into four levels.

In identity management (IdM), we manage the identity of the identified subject. To do so, the identity is registered in a registry called an Identity Register. The party that performs identity proofing and registers the subject in the Identity Register is called a Registration Authority (RA).

An identity has a lifecycle. ISO/IEC 24760-1 proposes managing that lifecycle in five phases: unknown, established, active, suspended, and archived.

For a subject identified in this way to use an online service, it creates an identity online that acts on its behalf. This identity must be something only the subject can create, and others must be able to confirm that the subject controls it. The information used to ensure that only the subject can create it is called a "credential"; a typical example is a password. The subject presents this credential to a mechanism called a verifier or a Credential Service Provider (CSP) to prove that the party present right now is indeed the subject. This is called authentication. The identity created through this authentication, for which others can confirm that the subject is in control, is called an authenticated identity.

The reliability of this authenticated identity depends on the reliability of the credential used, which in turn depends on how reliably the credential's lifecycle is managed, from issuance through delivery, activation, use, suspension, and deletion. And since the reliability of the resulting authenticated identity is the reliability of the attributes it contains (the fact that authentication was performed with a given credential is itself one of those attributes), it depends both on the reliability of the attributes confirmed through identity proofing and on the reliability of the credential.

There are, of course, many attributes besides the ones used for identification. The Identity Register also stores attributes, but only those needed for identity proofing. In practice you often see everything else stuffed in there too, but that is not necessary: you can assume independent attribute providers. ISO/IEC 24760-1 calls such a provider an Identity Information Provider (IIP); in general the term attribute provider is used more often. An IIP that can supply authoritative information is called an Identity Information Authority (IIA). From the standpoint of freshness and accuracy, information should always be obtained from the IIA. Doing so, however, lets the IIA learn where the information was presented, so sometimes information is deliberately fetched via another IIP to avoid this; that is a balance to strike between privacy and accuracy. Note that an organization that creates and provides authenticated identities is also a kind of IIP; in the industry such an IIP is usually called an IdP.

Note also that an Identity Register can exist completely independently of credentials or of the creation of authenticated identities through authentication by the subject. A customer database, for example, is a typical Identity Register, and managing it also belongs to identity management in the broad sense.

On the other side, there are parties that use the authenticated identities created in this way. Because they rely on others for identity information, they are called Relying Parties (RPs); when the focus is on the fact that they provide services to the subject or to third parties, they are also called Service Providers (SPs). An RP checks the reliability and validity of a received authenticated identity, for example via its signature, before using it.

The protocol by which an IIP and an RP exchange requests for and responses containing this information is called an identity federation protocol. OpenID Connect, whose specification I worked on, is a representative example of an identity federation protocol. In OpenID Connect, only the minimum necessary attribute information is requested each time, and the RP can use it only with the subject's consent.
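To illustrate the "minimum necessary attributes" point, here is a hedged sketch of building an OpenID Connect authentication request that asks only for the openid and email scopes; the endpoint, client_id, and redirect_uri values are placeholders, not real ones:

using System;

// Sketch of an OpenID Connect authentication request asking only for
// the minimum attributes (here: the email claim via the "email" scope).
// All parameter values below are placeholders.
var authorizationEndpoint = "https://idp.example.com/authorize";
var authenticationRequest = authorizationEndpoint +
    "?response_type=code" +
    "&client_id=" + Uri.EscapeDataString("my-client-id") +
    "&redirect_uri=" + Uri.EscapeDataString("https://rp.example.com/callback") +
    "&scope=" + Uri.EscapeDataString("openid email") +  // request only what is needed
    "&state=" + Uri.EscapeDataString("af0ifjsldkj") +   // CSRF protection
    "&nonce=" + Uri.EscapeDataString("n-0S6_WzA2Mj");   // replay protection
Console.WriteLine(authenticationRequest);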

With that, we have all the functions needed for identity management and federation. (Roughly speaking; strictly there are also PDPs, PEPs, and so on, but let's leave those for another occasion.) The question is how to distribute these functions. There are the perspectives of efficiency, security, and privacy; here I want to look at it from the privacy angle.

What should be considered from a privacy perspective is summarized in the so-called privacy principles. The OECD's eight principles and the US FIPPs are well known, but here I will use the principles of ISO/IEC 29100.

ISO/IEC 29100 has the following eleven principles:

1. Consent and choice
2. Purpose legitimacy and specification
3. Collection limitation
4. Data minimization
5. Use, retention and disclosure limitation
6. Accuracy and quality
7. Openness, transparency and notice
8. Individual participation and access
9. Accountability
10. Information security
11. Privacy compliance

Of these, the ones that bear on how functions are distributed are 3 (collection limitation), 5 (use, retention and disclosure limitation), and 6 (accuracy and quality).

Collection limitation (principle 3) requires collecting only the minimum information necessary to carry out the task at hand. To comply, the Identity Register should not collect anything beyond what is needed at registration; it follows that the Identity Register and other IIPs should be kept independent. The reference identifier generator and the Identity Register could be managed separately, but since the reference identifier ends up in the Identity Register anyway, it is more efficient to run them in the same organization. The Registration Authority (RA), which performs identity proofing and registers the result in the Identity Register, is, on the other hand, often operated by a separate organization. Where the Identity Register stores only part of the information used for identity proofing, the collection limitation principle suggests keeping the two apart. Considering the identity lifecycle, however, the Identity Register and the RA need to operate in close coordination, and if that close relationship is not maintained, unintended disclosure and other privacy risks can be expected to rise. In light of this, I personally think it is acceptable to operate the RA and the Identity Register as a single unit.

Use, retention and disclosure limitation (principle 5) says that data may be used only for the purposes consented to when it was obtained, retained only as long as necessary, and disclosed only to the extent consented to. This means that data must be managed in association with its purpose of use and the consent obtained. Seen that way, managing data collected for different purposes in one big pot is quite difficult. It follows that RPs, too, should not be consolidated gratuitously, but rather split up so that the management burden does not become excessive.

Finally, accuracy and quality (principle 6): as far as efficiency allows, information should be obtained from the IIA in real time. This is another reason not to pile information up in the IdP unnecessarily, so attributes are best managed per IIA.

That leaves the CSP. Operating the CSP together with the Identity Register is entirely conceivable, and I cannot immediately think of any major privacy impact in doing so. On the other hand, separating them is also quite plausible from the standpoint of flexibility: separation would allow an Identity Register to use multiple CSPs, or a CSP to serve multiple Identity Registers.

And so, at last, the answer to the question:

(1) Identifiers → Identity Register+RA

(2) Identity verification infrastructure → IdP+CSP

(3) Attributes → IIA/IIP

(4) Services → RP/SP

If we map the question onto these terms, then it seems best to split things along these lines and manage them separately.

July 23, 2014

MythicsReview of the ZS3 Storage Appliances:  Incredible Performance and Efficiencies [Technorati links]

July 23, 2014 08:39 PM

As you know, the Oracle Storage product line has recently undergone some major updates with several significant technology upgrades from the older ZFS Series.

The…


Mike Jones - MicrosoftOAuth Assertions specs describing Privacy Considerations [Technorati links]

July 23, 2014 07:19 PM

Brian Campbell updated the OAuth Assertions specifications to add Privacy Considerations sections, responding to area director feedback. Thanks, Brian!

The specifications are available at:

HTML formatted versions are also available at:

Ian GlazerDo we have a round wheel yet? Musings on identity standards (Part 1) [Technorati links]

July 23, 2014 06:18 PM

Why do humans continually seem to reinvent what they already have? Why is it that we take a reasonably functional thing and attempt to rebuild it, and in doing so render that reasonably functional thing non-functional for a while? This is a familiar pattern. You have a working thing. You attempt to "fix" it and in doing so break it. You then properly fix it and get a slightly more functional thing in the end.

Why is it that we reinvent the wheel? Because eventually, we get a round one. Anyone who has worked on technical standards, especially identity standards, recognizes this pattern. We build reasonably workable standards only to rebuild and recast them a few years later.

We do this not because we develop some horrid allergy to angle brackets – an allergy that can only be calmed by mustache braces. This is not why we reinvent the wheel, why we revisit and rebuild our standards. Furthermore, revisiting and rebuilding standards isn’t simply a “make-work” affair for identity geeks. Nor is it an excuse to rack up frequent flyer miles.

Identity in transition

We reinvent the wheel because the tasks needed of those wheels change. In IAM, the shift from SOA, SOAP, and XML to little-s services, REST, and JSON was profound. And we had to stay contemporary with the way the web and developers worked. In this case, the technical load that our IAM wheels had to carry changed.

But there is a more profound change to the tasks we must perform and the loads we must transport and it too will require us to examine our standards and see if they are up to the task.

It used to be that enterprise IAM was concerned with answering whether the right people got the right access. But that is increasingly not the relevant question. The question we must answer is: did the right people get the right experience? And not just the right people but also the right "things" – did they get the experience (or data) they needed at the right time?

There is another transition underway. This transition is closely related to IAM's transition from delivering and managing access to delivering and managing experience. We are being asked to haul more, and different, identities.

We are pretty good as an industry at managing a reasonable number of identities, each with a reasonable number of attributes. Surely, what is "reasonable" has increased over the years, and it is fairly safe to say that a few million identities in a directory is no longer a big deal.

But how well will we handle things? Things will have relatively few attributes. Things will produce a data stream that is really interesting, but their own attributes might not be that interesting. And, needless to say, there will be a completely unreasonable number of them: 20 billion? 50 billion? A whole lot of billions of them.

The transition of IAM isn't just from managing the identities of carbon-based life forms to silicon ones. This transition also includes relationships. Today we are okay at managing a few relationships, each with very few attributes. But what we as an industry must do is manage a completely unreasonable number of relationships between an unreasonable number of things, where each of these relationships has a fair number of attributes of its own.

That, my friends, is a heavy load to haul. And so it is worth spending a little time considering if our identity standards wheels are round. Let’s look at 4 different areas of IAM to see if we have round wheels:

  1. Authentication
  2. Authorization
  3. Attributes
  4. User provisioning

Authentication

Overall, I’d say the authentication wheel is round. We’ve got multiple protocols, multiple standards, which is both a reflection of the complexity of the problem and the maturity of the problem. OpenID Connect needs a few more miles on the road, but by no means does this mean you shouldn’t use it today. Expect new profiles over time but you certainly can get going today. And where OpenID Connect cannot take you, trusty SAML still can.

Although authentication is okay, representing assurance isn't. I wonder if we need to harmonize levels of assurance. I also wonder if this is even possible. Knowing that a person was proofed and how they were authenticated is nice, but as Mark Diodati will be the first to tell you, deployment matters. You can deploy a strong auth technology poorly and thus transform it into a weak auth system. So knowing your LOA 3 is equivalent to my LOA 2.25 might not be useful. More importantly, I wonder how small and medium sized organizations, those without a resident identity dork, figure out what LOA to require, what trust framework to use, and how to proceed. This, to me, seems like a place for the IDESG and its ilk.

And although the authentication wheel is round, that doesn't mean it is without its lumps. First, we do see some reinventing the wheel just to reinvent the wheel. OAuth A4C is simply not a fruitful activity and should be put down. Second, the fact that password vaulting exists at this point in history is an embarrassment. To be clear, I am not saying that password vaulting solutions and vendors are an embarrassment. It is the fact that we still have the need to password vault that is IAM's collective shame.

We have had workable authentication standards for many years and yet we still password vault. It means that identity vendors have not done enough to enable service providers. It means that service providers still exist who do not want to operate in the best interest of their enterprise customers. At a minimum, those service providers must offer a standards-based approach to authentication (and user provisioning would be nice too).

Let me be crystal clear: if your service provider doesn’t support identity standards, that service provider is not acting in your best interest. Period.

The existence of password vaulting also means that organizations haven’t been loud enough in their demands for a better login experience. Interestingly enough, I think the need for a mobile-optimized authentication experience will force service providers hands.

I know we are all trying to kill the password but I think a more reasonable, more achievable, and more effective goal is to eliminate the need for password vaulting through the use of authentication and federated SSO standards. By 2017, if I am still saying this, our industry has failed.

Authorization

Authorization's wheel is simultaneously over-inflated and flat. You can't talk about authZ without talking about XACML. XACML can do anything; it really is an amazing standard. But the problem with things that allow you to do anything is that they tend to make it hard to do anything. My recommendation to the industry is to focus on the policy tools and the PAPs, not the core protocol. Now the XACML TC knows it needs to be contemporary. The work on the JSON and REST bindings is a great start to make XACML more relevant for the modern web.

What about OAuth? Certainly OAuth can be used to represent the output of authorization decisions. But to do this, in some sense, requires diving into the semantics of scopes. It requires that your partners understand what your scopes mean. Understanding the semantics of scopes isn't a horrible requirement, but it does require service providers to invest time to understand them.

What about UMA? It definitely holds promise, especially when we consider the duties of all the parties involved in managing and enforcing access to resources. I really like the idea of a standard that has a profile describing the duties of the actors separate from the wireline protocol description. UMA definitely needs more miles on the road, and to be perfectly honest I still have a hard time understanding it in an enterprise context. Maybe now that Eve is coming back to the product world, the community will get more UMA awesomeness.

There is another thing to think about as we study the roundness of the authorization wheel. Knowing that the load we will have to carry is a heavy one, and one that includes "things", I think we need to think about how those "things" can make decisions with more autonomy. How can our authorization systems make authorization decisions closer to the place of use at the time of use? I believe we need actionable relationships. Actionable relationships allow a thing or a human agent to do something on my behalf without consulting a backend service. Very important in the IoT world. For more on actionable relationships, you can check out my talk on the Laws of Relationships.

Tomorrow I’ll post the rest of the talk and hopefully by Friday the video of it will be available as well.

Mike Jones - MicrosoftJWK Thumbprint spec incorporating feedback from IETF 90 [Technorati links]

July 23, 2014 03:11 PM

I've updated the JSON Web Key (JWK) Thumbprint specification to incorporate the JOSE working group feedback on the -00 draft from IETF 90. The two changes were:

If a canonical JSON representation standard is ever adopted, this specification could be revised to use it, resulting in unambiguous definitions for those values (which are unlikely to ever occur in JWKs) as well. (Defining a complete canonical JSON representation is very much out of scope for this work!)

The specification is available at:

An HTML formatted version is also available at:

Kuppinger ColeOperation Emmental: another nail in the coffin of SMS-based two-factor authentication [Technorati links]

July 23, 2014 11:17 AM
In Alexei Balaganski

On Tuesday, security company Trend Micro unveiled a long and detailed report on "Operation Emmental", an ongoing attack on online banking sites in several countries around the world. This attack is able to bypass the popular mTAN two-factor authentication scheme, which uses SMS messages to deliver transaction authorization numbers. There are very few details revealed about the scale of the operation, but apparently the attack was first detected in February and has affected over 30 banking institutions in Germany, Austria and Switzerland, as well as Sweden and Japan. The hackers supposedly got away with millions stolen from both consumer and commercial bank accounts.

Now, this is definitely not the first time hackers have defeated SMS-based two-factor authentication. Trojans designed to steal mTAN codes directly from mobile phones first appeared in 2010. Contrary to popular belief, these Trojans are not targeting only Android phones: in fact, the most widespread one, ZeuS-in-the-Mobile, has been discovered on various mobile platforms, including Android, Symbian, Blackberry and Windows Mobile. In 2012, an attack campaign dubbed "Eurograbber" successfully stole over 36 million euros from banks in Italy, Spain and the Netherlands. Numerous smaller-scale attacks have been uncovered by security researchers as well. So, what exactly is new and different about the Emmental attack?

First it’s necessary to explain in a few words how a typical attack like Eurograbber actually works.

  1. Using traditional methods like phishing emails or compromised web sites, hackers lure a user to click a link and download a Windows-based Trojan onto their computer. This Trojan will run in the background and wait for the user to visit their online banking site.
  2. As soon as the Trojan detects a known banking site, it will inject its own code into the web page. This code can, for example, display a “security advice” instructing the customer to enter their mobile phone number.
  3. As soon as the hackers have a phone number, an SMS message with a link to a mobile Trojan is sent to it and the customer is instructed to install the malicious SMS-grabbing app on their phone.
  4. By having both the customer's online banking PIN and the SMS TAN, hackers can easily initiate a fraudulent transaction, transferring money from the customer's account.

It’s quite obvious that such a scheme can only work when both PC and mobile Trojans operate in parallel, coordinating their actions through a C&C server run by hackers. This means that it can also be relatively easily disrupted simply by using an antivirus, which would detect and disable the Trojan. Another method is deploying special software on the banking site, which detects and prevents web page injections.

The hackers behind the Emmental attack are using a different approach. Instead of delivering a Trojan to a customer's computer, they are using a small agent that masquerades as a Windows updater. Upon start, this program makes changes to local DNS settings, replacing the IP addresses of known online banking sites with the address of a server controlled by the hackers. Additionally, it installs a new root SSL certificate, which forces browsers to consider this hacked server a trusted one. After that, the program deletes itself, leaving no traces of malware on the computer.
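As a rough illustration (this does not come from the Trend Micro report), an administrator worried about this kind of tampering could review the two artifacts the attack leaves behind – altered local name resolution and an unexpected trusted root certificate – with a sketch along these lines:

using System;
using System.IO;
using System.Security.Cryptography.X509Certificates;

class TamperCheck
{
    static void Main()
    {
        // 1. Look for suspicious static name-resolution entries (the hosts file
        //    is one common place a banking hostname can be pinned to a rogue IP).
        var hostsPath = Path.Combine(Environment.SystemDirectory, @"drivers\etc\hosts");
        foreach (var line in File.ReadAllLines(hostsPath))
        {
            var entry = line.Trim();
            if (entry.Length > 0 && !entry.StartsWith("#"))
                Console.WriteLine("hosts entry: " + entry);
        }

        // 2. List machine-wide trusted root certificates so unexpected,
        //    recently added roots can be reviewed manually.
        using (var store = new X509Store(StoreName.Root, StoreLocation.LocalMachine))
        {
            store.Open(OpenFlags.ReadOnly);
            foreach (var cert in store.Certificates)
                Console.WriteLine($"root CA: {cert.Subject} (valid from {cert.NotBefore:d})");
        }
    }
}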

The rest of the attack is similar to the one described above, but with a twist: the user never connects to the real banking site again; all communications take place with the fraudulent server. This deception can continue for a long time, and only after receiving a monthly statement from the bank would the user find out that their account has been cleared of all money.

In other words, while Emmental is not the first attack on mTAN infrastructure, it’s an important milestone demonstrating that hackers are actively working on new methods of defeating it, and that existing solutions that are supposed to make banks more resilient against this type of attack are much less effective than believed. SMS-based two-factor authentication has been compromised and should no longer be considered a strong authentication method. The market already offers a broad range of solutions from smartcards and OTP tokens to Mobile ID and smartphone apps. It’s really time to move on.

July 21, 2014

Kaliya Hamlin - Identity WomanResources for HopeX Talk. [Technorati links]

July 21, 2014 03:11 PM

I accepted an invitation from Aestetix to present with him at HopeX (10).

It was a follow-on talk to his Hope 9 presentation that was on #nymwars.

He is on the volunteer staff of the HopeX conference and was on the press team that helped handle all the press that came for the Ellsberg - Snowden conversation that happened mid-day Saturday.  It was amazing and it ran over an hour - so our talk, already scheduled for 11pm (yes), ended up starting at midnight.

Here are the slides for it - I modified them enough that they make sense if you just read them.  My hope is that we explain NSTIC, how it works, and the opportunity to get involved to actively shape the protocols and policies being developed and maintained.

Hope x talk from Kaliya
I am going to put the links about joining the IDESG up front, because that was our intention in giving the talk: to encourage folks coming to HopeX to get involved, to ensure that the technologies and policies allow citizens to use verified identity online when it is appropriate and, most importantly, to make SURE that the freedom to be anonymous and pseudonymous online is preserved.
This image is SOOO important I'm pulling it out and putting it here in the resources list.

WhereisNSTIC

There are only about 100 active people within the organization known as the Identity Ecosystem Steering Group, called for in the National Strategy for Trusted Identities in Cyberspace, which was published by the White House and signed by President Obama in April 2011 and originated from the Cyberspace Policy Review done just after he came into office in 2009. Here is the website for the National Program Office.

The organization's website is here:  ID Ecosystem - we have just become an independent organization.

My step by step instructions How to JOIN.

Information on the committees - the one that has the most potential to shape the future is the Trust Framework and Trust Mark Committee

 

From the Top of the Talk

Links to us:
Aestetix -  @aestetix Nym Rights
Kaliya - @identitywoman  -  my blog identitywoman.net

Aestetix - background + intro #nymwars from Hope 9

     Aestetix's links will be up here within 24h
We mentioned Terms and Conditions May Apply - follows Mark Zuckerberg at the end.

Kaliya  background + intro

I have had my identity woman blog for almost 10 years  as an Independent Advocate for the Rights and Dignity of our Digital Selves. Saving the world with User-Centric Identity

In the early 2000’s I was working on developing distributed Social Networks  for Transformation.
I got into technology via Planetwork and its conference in 2000 themed: Global Ecology and Information Technology.  They had a think tank following that event and then published in 2003 the Augmented Social Network: Building Identity and Trust into the Next Generation Internet.
The ASN and the idea that user-centric identity based on open standards is essential all made sense to me - that the future of identity online, our freedom to connect and organize, is determined by the protocols.  The future is socially constructed and we get to MAKE the protocols . . . and without open protocols for digital identity our IDs will be owned by commercial entities - the situation we are in now.
Protocols are Political - this book articulates this - Protocols: How Control Exists after Decentralization by Alexander R. Galloway. I excerpted key concepts of Protocol on my blog in my NSTIC Governance Notice of Inquiry.
I co-founded the Internet Identity Workshop in 2005 with Doc Searls and Phil Windley.  We are coming up on number 19 the last week of October in Mountain View and number 20 the third week of April 2015.
I founded the Personal Data Ecosystem Consortium in 2010 with the goal of connecting start-ups around the world building tools for individuals to collect, manage and get value from their personal data, along with fostering ethical data markets.  The World Economic Forum has done work on this (I have contributed to this work) with their Rethinking Personal Data Project.
I am shifting out of running PDEC to be Co-CEO, with my partner William Dyson, of a company in the field, The Leola Group.

NSTIC

Aestetix and I met just after his talk at HOPE 9 around the #nymwars (we were both suspended).
So where did NSTIC come from? The Cyberspace Policy Review in 2009 just after Obama came into office.
Near-Term Action Plan:
#10 Build a cybersecurity-based identity management vision and strategy that addresses privacy and civil liberties interests, leveraging privacy-enhancing technologies for the Nation.
Mid-Term Action Plan:
#13 Implement, for high-value activities (e.g., the Smart Grid), an opt-in array of interoperable identity management systems to build trust for online transactions and to enhance privacy.
NSTIC was published in 2011: Main Document - PDF  announcement on White House Blog.
Trust Frameworks are at the heart of what they want to develop to figure out how to navigate how things work.
MY POST the Trouble with Trust and the Case for Accountability Frameworks.
What will happen with the results of this effort?
The Cyber Security Framework (paper) the Obama Administration just outlined. NSTIC is not discussed in the framework itself – but both it and the IDESG figure prominently in the Roadmap that was released as a companion to the Framework.  The Roadmap highlights authentication as the first of nine different, high-priority "areas of improvement" that need to be addressed through future collaboration with particular sectors and standards-developing organizations.

The inadequacy of passwords for authentication was a key driver behind the 2011 issuance of the National Strategy for Trusted Identities in Cyberspace (NSTIC), which calls upon the private sector to collaborate on development of an Identity Ecosystem that raises the level of trust associated with the identities of individuals, organizations, networks, services, and devices online.

The National Program Office was launched in January 2012 and Jeremy Grant leads it.  You can read Commerce Secretary Locke's comments at the announcement at Stanford.
I wrote this article just afterwards: National! Identity! Cyberspace! Why we shouldn't Freak out about NSTIC   (it looks blank - scroll down).
Aaron Titus writes a similar post explaining more about NSTIC relative to the concerns arising online about the fears this is a National ID.
Staff for National Program Office

They put out a Notice of Inquiry to figure out how this Ecosystem should be governed.

Many people responded to the NOI - here are all of them.

I wrote a response to the NSTIC Notice of Inquiry about Governance.  It covers much of the history of the user-centric community and my vision of how to grow consensus. Most important for my NSTIC candidacy are the chapters about citizens' engagement in the systems, co-authored with Tom Atlee, the author of the Tao of Democracy and the just-published Empowering Public Wisdom.

The NPO hosted a workshop on Governance and another one on Privacy, where they invited me to present on the Personal Data Ecosystem.  The technology conference got folded into IIW in the fall of 2011.

O'Reilly Radar called it the Manhattan Project for online identity.

The National Program Office published a proposed:

Charter for the  IDESG Organization

ByLaws  and Rules of Association for the IDESG Organization

Also what committees should exist and how it would all work in this webinar presentation.  The Recommended Structure is on slide 6.  They also proposed a standing committee on privacy as part of the IDESG.

THEN (because they were so serious about private sector leadership) they published a proposed 2 year work plan, BEFORE the first Plenary meeting in Chicago in August 2012.

They put out a bid for a Secretariat to support the forthcoming organization and awarded it to a company called Trusted Federal Systems.
The plenary was and is open - to anyone and any organization from anywhere in the world. It is still open to anyone. You can join by following the steps on my blog post about it.
At the first meeting in August 2012 the management council was elected. The committees they decided should exist ahead of time had meetings.
The committees - You can join them - I have a whole post about the committees so you can adopt one.

Nym Issues!!!

So after the #nymwars it seemed really important to bring the issues around Nym Rights into NSTIC - IDESG.  They were confused - even though their bylaws allow for new committees. I supported Aestetix in writing out a charter for a new committee - I read it for the plenary in November of 2012 - and he attended the Feb 2013 Plenary in Phoenix. I worked with several other Nym folks to attend the meeting too.
They suggested that NymRights was too confrontational a name, so we agreed that Nym Issues would be a fine name. They also wanted to make sure that it would just become a sub-committee of the Privacy Committee.
It made sense to organize "outside" the organization so we created NymRights.
Basically the committee and its efforts have been stalled in limbo.
        Aestetix's links will be up here within 24h

The Pilot Grants from the NPO

Links
Year 1 - announcement about the FFO , potential applicant Webinar - announcement about all the grantees and an FAQ.

Year 2 - announcement about the FFO, potential applicant webinar, announcement about the grantees.

Year 3 - ? announcement about FFO - grantees still being determined.

Big Issues with IDESG

Diversity and Inclusion

I have been raising these issues from its inception (pre-inception in fact I wrote about them in my NOI).

I was unsure if I would run for the management council again - I wrote a blog post about these concerns that apparently made the NPO very upset.  I was subsequently "uninvited" to the International ID Conf they were hosting at the White House Conference Center for other western liberal democracies trying to solve these problems.

Tech President Covered the issues and did REAL REPORTING about what is going on.  In Obama Administration's People Powered Digital Security Initiative, There's Lots of Security, Fewer People.

This in contrast to a wave of hysterical posts about National Online ID pilots being launched.

The IDESG has issues with how the process happens. It is super TIME INTENSIVE.  It is not well designed so that people with limited time can get involved.  We have an opportunity to change things by becoming our own organization.

The 9th Plenary Schedule - can be seen here.  There was a panel on the first day with representatives who said that people like them and others from other different communities needed to be involved AS the policy is made.  Representatives from these groups were on the panel and it was facilitated by Jim Barnett from the AARP.

The Video is available online.

The "NEW" IDESG

The organization is shifting from being a government initiative to being one that is its own independent organization.

The main work where the TRUST FRAMEWORKS are being developed is in the Trust Framework and Trust Mark Committee.  You can see their presentation from the last committee here.

 

Key Words & Key Concept form the Identity Battlefield

Trust

What is Identity?  It's Socially Constructed and Contextual

Identity is Subjective

Aestetix's links will be up here within 24h

What are Identifiers?: Pointers to things within particular contexts.

Abrahamic Cultural Frame for Identity / Identifiers

Relational  Cultural Frame for Identity / Identifiers

What does Industry mean when it says "Trusted Identities"?

What is Verified?

AirBnB
Verified ID in the context of the Identity Spectrum : My post about the spectrum.

Reputation

In Conclusion: HOPE!

We won the #nymwars!

Links to Google's apology.

Skud's "The apology we hoped for."

More of Aestetix's links will be up here within 24h

The BC Government's Triple Blind System

Article about the system they have created and the citizen engagement process to get citizen buy-in - with 36 randomly selected citizens developing future policy recommendations for it.

Article about what they have rolled out in Government Technology.

Join the Identity Ecosystem Steering Group

Get engaged in the process to make sure we maintain the freedom to be anonymous and pseudonymous online.

Attend the next  (10th) Plenary in mid-September in Tampa at the Biometrics Conference

Join Nym Rights group.

http://www.nymrights.org

Come to the Internet Identity Workshop

Number 19 - Last week of October - Registration Open

Number 20 - Third week of April

KatasoftHosted Login for Modern Web Apps [Technorati links]

July 21, 2014 03:00 PM

Hosted Login from Stormpath

It’s no big secret: if you’re not using SaaS products to build your next great app, you’re wasting a lot of time.

Seasoned web developers have learned to solve common (i.e. annoying) problems with packaged solutions. If you’re really badass, your latest app is a symphony of amazing services, not a monolithic codebase that suffers from Not Invented Here.

But I’m gonna put money on this: you’re still building your login and registration forms from scratch and maintaining your own user database.

Why do we build login from scratch?

I have a few hypotheses on this, but one always seems to be true: user systems are the first thing we do after we master the Todo demo app. It’s fun, it’s a feature and we feel like we’ve accomplished something. Eventually we learn that there are a lot of things you can get wrong:

I could go on, but you already know. We commit these sins in the spirit of Ship It!.

Sometimes we use a framework like Rails, Express or Django and avoid most of these pitfalls by using their configurable user components. But we’re trying to get to App Nirvana, we want fewer concrete dependencies, less configuration, fewer resources to provision.

Login as a Service

What if you could send your user to a magical place, where they prove their identity and return to you authenticated?

Announcing Hosted Login – our latest offering from Stormpath!

With Hosted Login you simply redirect the user to a Stormpath-hosted login page, powered by our ID Site service. We handle all the authentication and send users back to your application with an Identity Assertion. This assertion contains all the information you need to get on with your business logic.
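Purely to illustrate the shape of that redirect-and-assert pattern - this is not the Stormpath SDK API, and the helper and parameter names below are hypothetical - an integration sketch might look like:

using System;
using System.Web;

// Hypothetical sketch of the hosted-login pattern described above.
// This is NOT the Stormpath SDK API; method and parameter names are made up.
public class HostedLoginSketch
{
    // Step 1: send the user to the hosted login page.
    public void Login(HttpResponse response)
    {
        string redirectUrl = BuildIdSiteRedirectUrl("https://myapp.example.com/idSiteResult");
        response.Redirect(redirectUrl);
    }

    // Step 2: on return, read and validate the signed identity assertion (a JWT).
    public string IdSiteResult(HttpRequest request)
    {
        string assertion = request.QueryString["assertion"]; // hypothetical parameter name
        return ValidateAssertion(assertion);
    }

    // Placeholders standing in for whatever the vendor SDK actually provides.
    private string BuildIdSiteRedirectUrl(string callbackUri)
        => throw new NotImplementedException("The SDK builds a signed redirect URL here.");

    private string ValidateAssertion(string assertion)
        => throw new NotImplementedException("The SDK verifies the JWT signature and expiry here.");
}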

And the best part? Very minimal contact with your backend application. In fact, just two lines of code (using our SDKs):

And with that… your entire user system is now completely service-oriented. No more framework mashing, no more resource provisioning. Oh, did we mention that it's beautiful as well? That's right: if you don't want to do any frontend work either, you can just use our default screens:

Screenshot

What problems does it solve?

Hosted Login solves a lot of the problems that are sacrificed in the name of Ship It, plus a few you may not have thought of:

Customization

While we provide default screens for hosted login, you can fully customize your user experience. Just create a Github repository for your ID Site assets and give us the Github URL! We’ll import the files into our CDN for fast access and serve your custom login pages instead of our default.

To customize your hosted login pages, you’ll want to use Stormpath.js, a small library that I’ve written just for this purpose. It gives you easy access to the API for ID Site and at ~5k minified it won’t break the bank.

For more information on this feature please refer to our in-depth document: Using Stormpath’s ID Site to Host your User Management UI

We’d love to know how you find Hosted Login! Feel free to tweet us at @gostormpath or contact me directly via robert@stormpath.com

CourionExtending IAM into the Cloud [Technorati links]

July 21, 2014 02:12 PM

Access Risk Management Blog | Courion

Your data is everywhere. And so are your applications. In the past, everything resided in the data center, but today they're stored in the cloud, by a partner (MSP), and even running on mobile devices.

Your customers, partners and employees are also everywhere. As a security professional, you need to ensure that the right people have access to the right data and are doing the right things with it. That's where Intelligent Identity Access Management comes in. But in the era of cloud-computing, who knows where the data physically resides? And with users and accounts spread around the globe, how can you ensure the data is being accessed by the right people, according to your policies? Again, that's where Intelligent Identity Access Management is crucial.

If your data were just centrally located and being accessed by individuals and devices that you manage, traditional IAM solutions work well. But that's probably not the case. You have data in internal and outsourced systems. Some of the outsourced systems may be wholly controlled by your contracts, while others may be shared among thousands of other organizations. And that data is being accessed by employees, partners and customers from their homes, phones and tablets, on planes trains and automobiles.

From a security perspective, it's imperative to provision, govern and monitor information access wherever that information resides and however it's being accessed, whether those are physically in your IT environment or in the cloud. So what are your options?

Options for Provisioning, Governance and Monitoring in the Cloud

Two obvious questions are "where's my IAM solution?" and "where's my data?" After all, both must reside somewhere and be secured. If we constrain the answers to those questions to "on premise" or "in the cloud", we have four options.

1. Host internally, manage internal applications

Traditional IAM solutions reside on IT managed hardware within an enterprise. They're typically located in a server room where they can be physically controlled by IT. They are configured to manage applications that also reside on servers physically controlled by IT. This is a largely closed system, with the administrative control and the application resources both co-located within IT. It makes security simpler, but in the era of cloud computing, is becoming increasingly rare.

2. Host internally, manage internal and cloud-based applications

As enterprise applications have migrated outside of the data center, the need to manage those applications has fallen to traditional IAM solutions. IAM vendors like Courion have evolved their suites to natively connect to cloud-based systems from an on premise administration point. Existing "connector libraries" have been extended to include connectors to cloud-based systems. These new connectors sit side-by-side with existing on premise connectors and reach out to cloud applications.

This evolution has been largely seamless, as the same architecture used for managing internal resources has been applied to external, cloud-based resources. The protocols change, like using SOAP over HTTP rather than files over SMB, or RESTful web services rather than SOAP, but the architecture and techniques survived.

3. Host in the cloud, manage internal and cloud-based applications

Just as enterprise applications are now hosted in the cloud, there is increasing interest in hosting security systems in the cloud. This enables enterprises to focus on their core competencies rather than security management and identity management, while at the same time optimizing CapEx for OpEx expenditures.

Early experiments are promising, with IAM solutions providing tunneling capabilities from cloud-based infrastructure. Tunneling can be through VPNs, reverse proxies or dedicated appliances. Over time, this will likely become the preferred deployment option.

4. Host in the cloud; manage cloud-based applications

If an enterprise has no data in house, then a pure cloud-based solution is ideal. Operating on Office 365 + Salesforce + ADP, a cloud-based IAM solution can effectively provision and govern cloud-based applications. This scenario eliminates the complexity and cost of network tunneling solutions since everything is natively in the cloud. Here, the protocols are rapidly standardizing on RESTful web services, with common token-based security and federation. However, like the all-internal scenario, all-cloud environments are rare.

Hybrid – the viable solution

Of these options, only two are typically feasible, since most organizations have some data on premise and some in the cloud. There are exceptions, such as a cloud-native startup or certain government situations, but in general a hybrid solution is required. Choosing between the 2nd and 3rd options described above, whether you host your IAM solution in the cloud or host it internally, comes down to a deployment choice.

Courion has customers who are doing each. Most run our IAM solution on premise, while some deploy it in the cloud. For cloud deployments, most choose private cloud infrastructure, while some go for public infrastructure. But the predominant approach, even in 2014, is to deploy on premise. This is chiefly because most data still resides locally, so most applications reside locally, tilting the equation toward an internally hosted IAM solution. As more enterprise applications migrate to the cloud, the decision to host the Courion suite in the cloud will likely shift.

Unlike enterprise data, however, people have already shifted to the cloud. Mobile devices, from phones to tablets, are the norm. Most organizations provide secure access to critical systems on a 24x7 basis, to individuals located on premise and on the go. So parts of your IAM infrastructure must be either in the cloud or at the edge (DMZ).

Again, Courion solutions are well suited for this shift. The most common security transaction, other than login, is the humble Password Reset. This must be accessible from anywhere and must be very reliable. It's required from the road, at night, on weekends and 2 minutes before the big sales presentation. Courion customers have hosted their password reset infrastructure in the DMZ for exactly this purpose. In addition, the Courion suite is tooled with a clean interface so customers, partners and employees are met with a consumer-grade experience, accessible on their laptop, tablet or phone.

As your data and apps move to the cloud, so do your identity repositories and access control models, as mentioned earlier. Your IAM solution can span both, but it's still advantageous to consolidate identities and provide a more seamless and simple sign-on experience for customers, partners and employees. Enter Ping Identity, another cloud offering that integrates with Courion solutions. Just as we expanded to cloud apps as they entered the business, a strong partnership allows for seamless integration with Ping to offer federation and SSO capabilities.

Single Sign On (SSO) impacts the decision of where to deploy an IAM solution. While IAM can provision, govern and monitor access to applications in cloud-based and on premise environments, SSO systems provide seamless application login and access to the user community. By coupling the flexibility of Courion's industry leading IAM solution with the SSO and federation capabilities of Ping, organizations can manage access across all of their applications. Because both products leverage a common structure with Active Directory, the result is a great experience for the end user and a manageable system for IT.

Conclusion

As the computing world shifts to the cloud, with consumer-grade technology leading the enterprise, our customers, partners and employees expect great access to information. As security professionals, our job is to balance "great" access with "secure" access. We make choices every day about the solutions we deploy and the infrastructure on which they reside. Courion is here to help.

blog.courion.com

Ludovic Poitou - ForgeRockWhat we build at ForgeRock… [Technorati links]

July 21, 2014 10:43 AM

Since I started working at ForgeRock, I've had a hard time explaining to my non-technical relatives and friends what we are building. But those days are over.

Thanks to our Marketing department, I can now refer them to our “ForgeRock Story” video:


Filed under: Identity Tagged: ForgeRock, iam, identity, IRM, opensource, security, video

Julian BondApparently the CMax II is for sale. [Technorati links]

July 21, 2014 09:35 AM
Apparently the CMax II is for sale.
http://bikeweb.com/files/images/cmax%20leaves%203%20small.preview.jpg

Sale details here. http://bikeweb.com/node/2909
Bike details here: http://bikeweb.com/image/tid/114

T-Max III, Volvo seat, occasional 2 seater and large luggage area. Faster (maybe!), safer, warmer, more comfortable than a conventional T-Max.

Not sure I can afford it. It's likely to be priced to reflect the work rather than cheap because it's unusual.
 bikeweb.com/files/images/cmax%20leaves%203%20small.preview.jpg »

[from: Google+ Posts]
July 19, 2014

Eve MalerA new identity relationship [Technorati links]

July 19, 2014 06:25 PM

I’ve been writing on this blog about identity and relationships for a long time (some samples…). Now I’ve forged (see what I did there?) a new relationship, and have joined ForgeRock’s Office of the CTO. Check out my first post on the ForgeRock blog. I’m really psyched about this company and my new opportunities to make cool Identity Relationship Management progress there. And I’ve found a lot of fellow rock ‘n’ rollers and Scotch drinkers in residence too — apparently that’s something of a job requirement for me, as many of my dear friends and erstwhile colleagues at Forrester have similar habits!

My new blogging goal is to add some pointers here to my ForgeRock posts, and — hopefully — to blog here more often than I had been in recent years. (Maybe some fresh nutrition-blogging?)

See the icons in the About Me section to the right. If you’re an old friend, stay in touch, and if we haven’t met yet, you can use the links to see about forging a new online relationship.

Anil JohnWhat are KBA Metrics? [Technorati links]

July 19, 2014 01:15 PM

There is currently a discussion going on in the Identity Ecosystem Steering Group (IDESG) regarding knowledge based authentication (KBA) metrics. I am a bit unsure about what is being sought by the IDESG from a standards development organization (SDO). This blog post is an attempt at framing the questions, as I understand them, to determine if there is value here, or if it is the application of makeup to porcine livestock.

Click here to continue reading. Or, better yet, subscribe via email and get my full posts and other exclusive content delivered to your inbox. It’s fast, free, and more convenient.


These are solely my opinions and do not represent the thoughts, intentions, plans or strategies of any third party, including my employer.

Julian Bond [Technorati links]

July 19, 2014 08:12 AM
July 18, 2014

ForgeRockThe care and feeding of online relationships [Technorati links]

July 18, 2014 06:32 PM

I’m really excited to join ForgeRock! ForgeRock is doing amazing work around identity relationship management, and relationships — secure, identity-enabled, privacy-respecting, data-sharing, network-connected — are near and dear to my heart. (You didn’t think I was talking about Tinder, did you?)

My new role involves driving innovation for the ForgeRock Open Identity Stack, and in just a few short days I’ve already had mind-blowing conversations with my new colleagues about ways we can enable and enhance lots of types of relationships through the OIS. For one, take Scott McNealy’s invocation of “who’s who, what’s what, and who gets access to what” — we know it applies to organizations needing to control access; of course, it’s critically important to every organization to achieve this goal. We’re working on extending this privilege even to consumers who use systems fueled by ForgeRock, so that these individuals have a say in their own digital-footprint destinies. (Admittedly, a Tinder-like use case did come up in discussions today…) For those of you who have followed my work on User-Managed Access for while, yes, UMA will play a part in this story.

Having jumped in with both feet, I’m getting some chances to represent ForgeRock at events in the very near term. For starters, I’ll be speaking at SecureCIO in San Francisco this Friday, and will have the pleasure of joining my colleague Allan Foster for a talk at the Cloud Identity Summit next week. And then there’s our July 29 webinar with my alma mater Forrester Research about adding relationship management to identity — hope you’ll register and join us!

The post The care and feeding of online relationships appeared first on ForgeRock.

Kuppinger Cole05.02.2015: Cloud Compliance & Datenschutz [Technorati links]

July 18, 2014 04:50 PM
In KuppingerCole

This seminar teaches you the fundamental and industry-specific regulations relevant to your cloud strategy and informs you about today's and tomorrow's requirements for data security and data protection.

Are you responsible for planning, introducing and managing cloud services in your company? Then this seminar will answer all of your open questions on compliance and data protection.
more

Julian BondI hate it when good services on the internet go dark and disappear. [Technorati links]

July 18, 2014 04:43 PM
I hate it when good services on the internet go dark and disappear.

There used to be a wonderful tool for exploring music space at http://audiomap.tuneglue.net/ It gathered data from last.fm and Discogs about related artists and presented it in a Java applet spider diagram.

Now it redirects to an EMI Hosting holding page and that sucks.

There's an alternative one here http://www.liveplasma.com/ that's not bad but it's not the same.
 EMI Hosting »

[from: Google+ Posts]

Kuppinger Cole13.11.2014: Cloud Compliance & Datenschutz [Technorati links]

July 18, 2014 04:42 PM
In KuppingerCole

This seminar teaches you the fundamental and industry-specific regulations relevant to your cloud strategy and informs you about today's and tomorrow's requirements for data security and data protection.

Are you responsible for planning, introducing and managing cloud services in your company? Then this seminar will answer all of your open questions on compliance and data protection.
more

Kuppinger Cole02.02.2015: Big Data für die Informationssicherheit [Technorati links]

July 18, 2014 04:22 PM
In KuppingerCole

Real-time Security Analytics: what you need to watch out for when getting started.

Get an overview of real-time monitoring using Big Data tools, and learn how to comply with data protection regulations in the context of network monitoring.
more

Kuppinger Cole12.11.2014: Big Data für die Informationssicherheit [Technorati links]

July 18, 2014 04:05 PM
In KuppingerCole

Real-time Security Analytics: what you need to watch out for when getting started.

Get an overview of real-time monitoring using Big Data tools, and learn how to comply with data protection regulations in the context of network monitoring.
more

Kuppinger ColeWhat’s the deal with the IBM/Apple deal? [Technorati links]

July 18, 2014 10:58 AM
In Alexei Balaganski

So, unless you’ve been hiding under a rock this week, you’ve definitely heard about the historic global partnership deal forged between IBM and Apple this Tuesday. The whole Internet’s been abuzz for the last few days, discussing what long-term benefits the partnership will bring to both parties, as well as guessing which competitors will suffer the most from it.

Different publications have named Microsoft, Google, Oracle, SAP, Salesforce and even BlackBerry as the companies the deal was primarily targeted against. Well, at least for BlackBerry this could indeed be one of the last nails in the coffin, as their shares plummeted after the announcement and the trend seems to be long-term. IBM’s and Apple’s shares rose, unsurprisingly; however, financial analysts don’t seem to be too impressed (in fact, some recommend selling IBM stock). This is, however, not the point of my post.

Apple and IBM have a history of bitter rivalry. Thirty years ago, when Apple unveiled its legendary Big Brother commercial, it was a tiny contender against IBM’s domination of the PC market. How times have changed! Apple has since grown into the largest player in the mobile device market, with a market capitalization several times larger than IBM’s. IBM sold its PC hardware business to Lenovo years ago and now concentrates on enterprise software, cloud infrastructure, big data analytics and consulting. So they are no longer competitors, but can we really consider them equal partners? Apple’s cash reserves continue to grow, while IBM’s revenues have been declining over the last two years. After losing a $600M contract with the US government to AWS last year, a partnership with Apple is a welcome change for them.

So, what’s in this deal, anyway? In short, it includes the following:

For Apple, this deal marks a renewed attempt to get a better hold on the enterprise market. It’s well known that Apple has never been successful in this, and whether that was because of ignoring enterprise needs or simply an inability to develop the necessary services in-house can be debated. This time, however, Apple is bringing in a partner with a lot of experience and a large portfolio of existing enterprise services (notorious, however, for their consistently bad user experience). Could an exclusive combination of a shiny new mobile UI with a proven third party backend finally change the market situation in Apple’s favor? Personally, I’m somewhat skeptical: although a better user experience does increase productivity and would be a welcome change for many enterprises, we’re still far away from a mobile-only world, and UI consistency across mobile and desktop platforms is a more important factor than a shiny design. In any case, the biggest thing that matters for Apple is the possibility to sell more devices.

For IBM the deal looks even less transparent. Granted, we do not know the financial details, but judging by how vehemently their announcement stated that they are “not just a channel partner for Apple”, many analysts do suspect that reselling Apple devices could be a substantial part of IBM’s profit from the partnership. Another important point is, of course, that IBM cannot afford to maintain a truly exclusive iOS-only platform. Sure, iOS is still a dominant platform on the market, but its share is far from 100%. Actually, it is already decreasing and will probably continue to decrease in the future as other platforms gain market share. Android has been growing steadily during the last year, and it’s definitely too early to dismiss Windows Phone (remember how people were trying to dismiss Xbox years ago?). So, IBM must continue to support all other platforms with products such as MaaS360 and can only rely on additional services to support the notion of iOS exclusivity. In any case, the partnership will definitely bring new revenue from consulting, support and cloud services; however, it’s not easy to say how much Apple will actually contribute to that.

So, what about the competitors? One thing that at least several publications seem to ignore is that those companies that are supposed to suffer from the new partnership are operating on several completely different markets and comparing them to each other is like comparing apples to oranges.

For example, Apple does not need IBM’s assistance to trump BlackBerry as a rival mobile device vendor. But applying the same logic to Microsoft’s Windows Phone platform would be a big mistake. Surely, its current share of the mobile hardware market is quite small (not in every market, by the way: in Germany it is over 10% and growing), but to claim that Apple/IBM will drive Microsoft out of the enterprise service business is simply ridiculous. In fact, Microsoft is a dominant player there with products like Office 365 and Azure Active Directory, and it’s not going anywhere yet.

Apparently, SAP CEO Bill McDermott isn’t too worried about the deal either. SAP already offers 300 enterprise apps for the iOS platform and claims to be years ahead of its competitors in the area of analytics software.

As for Google – well, they do not make money from selling mobile devices. Everything Google does is designed to lure more users into their online ecosystem, and although Android is an important part of their strategy, it’s by no means the only one. Google services are just as readily available on Apple devices, after all.

Anyway, the most important question we should ask isn’t about Apple’s or IBM’s, but about our own strategies. Does the new IBM/Apple partnership have enough impact to make an organization reconsider its current MDM, BYOD or security strategy? And the answer is obviously “no”. BYOD is by definition heterogeneous, and any solution deployed by an organization for managing mobile devices (and, more importantly, access to corporate information from those devices) that is locked to a single platform is simply not a viable option. Good design may be good business, but it is not the most important factor when the business is primarily about enterprise information management.

Kuppinger ColeExecutive View: Symantec.cloud Security Services - 70926 [Technorati links]

July 18, 2014 09:23 AM
In KuppingerCole

Symantec was founded in 1982 and has evolved to become one of the world’s largest software companies with more than 18,500 employees in more than 50 countries. Symantec provides a wide range of software and services covering security, storage and systems management for IT systems.   Symantec has a very strong reputation in the field of IT security that has been built around its technology and experience. While Symantec has a wide range of security...
more

July 17, 2014

Kantara InitiativeOpen Standards Drive Innovation – Kantara CIS Workshop [Technorati links]

July 17, 2014 09:36 PM

The identity revolution has arrived, and we are ready for the next installment of the Cloud Identity Summit.

Identity Services are converging more and more with every technology and nearly every part of our lives.  Identity is moving fast and it’s not only a technical or policy discussion anymore.  As IRM notes, Identity Services are key to business development, growing revenue, and economies. But our world has also changed over the last 18 months.  Privacy and Trust are under hot debate… not to mention how they factor into technology adoption.  We believe that Identity tech and policy standards are core to building a platform for innovation, and not just any standards but Open Standards. With transparency, openness and multi-stakeholderism as core values, Open Standards are key to building trusted platforms along with the more traditional national standards.

Open Standards move faster, provide new proving ground, and, ultimately, drive innovation!

We are thrilled to have industry leaders participate in our Cloud Identity Summit Workshop. Their knowledge and expertise will be shared through the CIS event and Kantara Initiative workshop.

Why should you attend? This workshop includes 3 sessions discussing open standards as a driver of marketplace innovation. Attendees will learn how leading organizations drive innovation through technology and standards development and partnerships.

Who should attend? C-Level, Managers, Directors, Technologists, Journalists, Policy Makers and Influencers.

1. Open Standards Driving Innovation
Presented by: the IEEE Standards Association
Participants: Allan Foster, Vice President Technology & Standards, ForgeRock; John Fontana, PingID; Scott Morrison, SVP and Distinguished Engineer, CA Technologies; Dennis Brophy, Mentor Graphics
Abstract: Today, more than ever, open standards are core to unbounded market growth and success through innovation. Those involved in innovation systems (companies, research bodies, knowledge institutions, academia and standards developing communities) influence knowledge generation, diffusion and use, and shape global innovation capacity. As the global community strives to keep pace with technology expansion and to anticipate the technological, societal and cultural implications of this expansion, and as it faces the increasing interference of technology with economic, political and policy drivers, embracing a bottom-up, market-driven and globally open and inclusive standards development paradigm will help ensure strong integration, interoperability and increased synergies along the innovation chain across boundaries. Globally open standardization processes and standards produced through a collective of standards bodies adhering to such principles are essential for technology advancement to ultimately benefit humanity, as the global expert communities address directly, in an open and collaborative way, such global issues as sustainability, cybersecurity, privacy, education and capacity building. Working within a set of principles that:

2. Federation Integration and Deployment of Trusted Identity Solutions
Presentations by: Lena Kannappan, 8k Miles FuGen Solutions, and Ryan Fox, ID.me
Abstract: Deployment and integration of identity federations can be a challenge, but through standards setting and innovative testing that process can move much faster and bring benefits to all parties, growing their respective markets and budget power. Industry-based development and application of standards helps set the industry levels for operations, while testing and approval marks help support rapid onboarding of partners. When identity federations and partners make use of agreed-upon open standards, a platform is created that allows innovative organizations to build new models and compelling services. Innovators can begin to leap into new areas, proving their business value and vitality. In this session leaders discuss:

3. Approaches to Solving Enterprise Cybersecurity Challenges
Presented by: SecureKey
Participants: Andre Boysen, EVP Marketing & Digital Identity Evangelist, SecureKey Technologies Inc.; Christine Desloges, Assistant Deputy Minister Rank, Department of Foreign Affairs, Trade and Development (DFATD); Patricia Wiebe, Director of Identity Architecture at BC Provincial Government
Abstract: There is an identity ecosystem emerging in North America that is unique in the world. It is a multi-enterprise service focused on making more meaningful services available online while at the same time making it easier for users to enroll in, access and control information shared by these services. Things like easy access to online government services, opening a bank account on the internet, proving your identity for new services online, registering your child at school or participating in an education portal are becoming possible. The service model for the Internet is moving from app-centric to user-centric. The current password model of authentication needs to evolve. Every web service needs to make a choice between making its credentials stronger by adding multifactor authentication (BYOD) or partnering to get authentication from a trusted provider (BYOC – bring your own credential).

GluuSymplified… So long and thanks for all the fish! [Technorati links]

July 17, 2014 08:49 PM

Symplified Adieu

As many of you have heard, Symplified is exiting the access management market. The company’s founders had a long history in the single sign-on business, having founded Securant in the late nineties. Securant was acquired by RSA in September 2001, and evolved into RSA Cleartrust, which is still in production today at many organizations.

It seemed logical that the experienced team behind such a successful product would have launched an equally successful SaaS offering. I don’t know the whole back story, but many things have to align for a startup to succeed. You need good execution, but you also need a little bit of good luck.

I first ran into Symplified at Digital Identity World in 2008 (thanks for the flying monkey!). At the next Digital Identity World, I had a long conversation with Eric Olden about utility computing. He gave me a copy of the book The Big Switch, which helped shape my thinking about how utility computing could make sense for SSO and access management, and how lowering the price could actually expand the size of the market.

Although Gluu has many competitors, identity and access management is a very large global market, which Gluu cannot serve alone. We’re sad to see the exit of one of the early innovators who helped pave the way for a new delivery model for access management. Here at Gluu we’re grateful for Symplified’s early leadership, dedication to their customers, and management excellence.

As a small thanks and to bid farewell to one of our respected peers, I composed this haiku:

First SaaS SSO
Visionary service
Sadly, fate had other plans

Best of luck to all at the Symplified team!

Kuppinger ColeLeadership Compass: Cloud User and Access Management - 70969 [Technorati links]

July 17, 2014 09:47 AM
In KuppingerCole

Leaders in innovation, product features, and market reach for Cloud User and Access Management. Manage access of employees, business partners, and customers to Cloud services and on-premise web applications. Your compass for finding the right path in the market.


more
July 16, 2014

Julian BondWell, well. So the Myers-Briggs test is totally meaningless, unscientific bullshit. There's a surprise... [Technorati links]

July 16, 2014 05:14 PM
Well, well. So the Myers-Briggs test is totally meaningless, unscientific bullshit. There's a surprise! I wonder how many other cod-psych tests are the same and have about as much 2014 relevance as astrology or palm reading.
http://www.vox.com/2014/7/15/5881947/myers-briggs-personality-test-meaningless
via http://boingboing.net/2014/07/16/myers-briggs-personality-test.html
 Why the Myers-Briggs test is totally meaningless »
It's no more scientifically valid than a BuzzFeed quiz.

[from: Google+ Posts]

Kuppinger ColeEU-Service Level Agreements for Cloud Computing – a Legal Comment [Technorati links]

July 16, 2014 09:20 AM
In Karsten Kinast

Cloud computing allows individuals, businesses and the public sector to store their data and carry out data processing in remote data centers, saving on average 10-20%. Yet there is scope for improvement when it comes to the trust in these services.

The new EU guidelines, developed by a Cloud Select Industry Group of the European Commission, were meant to provide reliable means and a good framework to create confidence in cloud computing services. But is it enough to provide a common set of areas that a cloud SLA should cover and a common set of terms that can be used, as the guidelines do? Can this meet the concerns of individuals and businesses when – or if – they use cloud services?

In my opinion it does not, at least not sufficiently.

Taking a closer look at the guidelines from a legal perspective, and thus concentrating on chapter 6 (“Personal Data Protection Service Level Objectives Overview”), they appear to offer nothing tangibly new. The Service Level Objectives (SLOs) described therein do give a detailed overview of the objectives that must be achieved by the provider of a cloud computing service. However, they lack useful examples and guidance for practical application. I would have expected some kind of concrete proposals for the wording of a potential agreement. Any kind of routine for the procedure of creating a cloud computing service agreement would, to my mind, be a first step toward increasing trust in cloud computing.

Since the guidelines fall short especially in this pragmatic respect, their benefit in practice will be rather limited.

As a suggestion for improvement, one could follow the example of the ENISA “Procure Secure” guidelines. They focus on examples from real life and show what should be included in a cloud computing contract. And they support cloud customers in setting up a clearly defined and practical monitoring framework, also by giving “worked examples” of common situations and best-practice solutions for each parameter suggested.

July 15, 2014

Kuppinger ColeLeadership Compass: Cloud IAM/IAG - 71121 [Technorati links]

July 15, 2014 09:39 AM
In KuppingerCole

The Cloud IAM market is currently driven by products that focus on providing Single Sign-On to various Cloud services as their major feature and business benefit. This will change, with two distinct evolutions of more advanced services forming the market: Cloud-based IAM/IAG (Identity Access Management/Governance) as an alternative to on-premise IAM suites, and Cloud IAM solutions that bring a combination of directory services, user management, and access management to the Cloud.

...
more
July 14, 2014

Kuppinger ColeAmazon Web Services: One cloud to rule them all [Technorati links]

July 14, 2014 01:23 PM
In Alexei Balaganski

Since launching its Web Services in 2006, Amazon has been steadily pushing towards global market leadership by continuously expanding the scope of its services, increasing scalability and maintaining low prices. Last week, Amazon made another big announcement, introducing two major new services with funny names but a heavy impact on future competition in the mobile cloud services market.

Amazon Zocalo (Spanish for “plinth”, “pedestal”) is a “fully managed, secure enterprise storage and sharing service with strong administrative controls and feedback capabilities that improve user productivity”. In other words, it is one of the few user-facing AWS services and none other than a direct competitor to Box, Google Drive for Work and other products for enterprise document storage, sharing, and collaboration. Built on top of AWS S3 storage infrastructure, Zocalo provides a cross-platform solution (for laptops, iPads and Android tablets, including Amazon’s own Kindle Fire) for storing and accessing documents from anywhere, synchronizing files between devices, and sharing documents for review and feedback. Zocalo’s infrastructure provides at-rest and in-transit data encryption, centralized user management with Active Directory integration and, of course, ten AWS geo-regions to choose from in order to be compliant with local regulations.

Now, this does look like “another Box” at first sight, but with the ability to offer cloud resources cheaper than any other vendor, even with Zocalo’s limited feature set Amazon has every chance of quickly gaining a leading position in the market. With Google first announcing unlimited storage for its enterprise customers and now Amazon driving prices further down, cloud storage itself has very little market value left. Just being “another Box” is simply no longer sustainable, and only the biggest players, and those who can offer additional services on top of their storage infrastructure, will survive in the long run.

Amazon Cognito (Italian for “known”) is a “simple user identity and data synchronization service that helps you securely manage and synchronize app data for your users across their mobile devices.” Cognito is part of a newly announced suite of AWS mobile services for mobile application developers, so it may not have caused a splash in the press like Zocalo, but it’s still worth mentioning here because of its potentially big impact on future mobile apps. First of all, by outsourcing identity management and profile synchronization between devices to Amazon, developers can free up resources to concentrate on the business functionality of their apps and thus bring them to market faster. Second, using the Cognito platform, app developers are always working with temporary, limited identities, safeguarding their AWS credentials as well as enabling uniform access control across different login providers. Thus, developers are implicitly led towards implementing security best practices in their applications.

Currently, Cognito supports several public identity providers, namely Amazon, Facebook and Google; however, the underlying federation mechanism is standards-based (OAuth, OpenID Connect), so I cannot believe it won’t soon be extended to support enterprise identity providers as well.

Still, as much as an ex-developer in me feels excited about Cognito’s capabilities, an analyst in me cannot but think that Amazon could have gone a step further. Currently, each app vendor would maintain their own identity pool for their users. But why not give users control over their identities? Had Amazon made this additional step, it could eventually become the world’s largest Life Management Platform vendor! How’s that for an idea for Cognito 2.0?

CourionWhat Makes Intelligent IAM Intelligent [Technorati links]

July 14, 2014 01:00 PM

Access Risk Management Blog | Courion

Bill Glynn

In order to explain what makes Intelligent IAM intelligent, we must first discuss why IAM needs to be intelligent. Fundamentally, IAM is a resource allocation process that operates on the simple principle that people should only have access to the resources they need in order to do their job. So, basically, IAM is used to implement the Marxist philosophy, “to each according to need”. Therein lies one of the problems: without intelligence, IAM operations are inconsistent and can be easily corrupted, resulting in decreased efficiency of workers, increased risk to the corporation (more on that later) or both. The folks with the power have the ability to give some people (the privileged class, like their friends) more access than they need, while others (the exploited workers) may not have access to the resources they truly need, which leads to civil unrest and the potential collapse of corporate society as we know it.

However, given appropriate guidelines (rules) and sufficient information (knowledge), traditional IAM has evolved into an inherently intelligent process for managing resource allocation, such as Courion’s Intelligent IAM solution. On the front end, access requests are evaluated to see if they violate any business rules, such as, “If you aren’t in the Sales department, then you can’t have access to the company sales commission report.”

Such business rules, combined with knowledge about what access recipients request and should receive, enable the access assignment process to be an intelligent activity, ensuring that people do or don’t get access to corporate resources as determined by their functional role or their operational needs. On the back end, the entire corporate environment is continuously monitored, looking for evidence of any business rule violations.
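To make that idea concrete, here is a minimal, hypothetical sketch of a department-based rule check like the one above. The class names, rule table and method are invented for illustration only and are not Courion's actual API.

// Hypothetical illustration only -- not Courion's API. A tiny rule check that
// evaluates the "only Sales may request the sales commission report" rule above.
using System;
using System.Collections.Generic;

class AccessRequest
{
    public string UserId { get; set; }
    public string Department { get; set; }
    public string Resource { get; set; }
}

static class BusinessRules
{
    // Resource -> department required to request it.
    static readonly Dictionary<string, string> RequiredDepartment =
        new Dictionary<string, string> { { "SalesCommissionReport", "Sales" } };

    public static bool IsAllowed(AccessRequest request)
    {
        string required;
        if (RequiredDepartment.TryGetValue(request.Resource, out required))
            return string.Equals(request.Department, required, StringComparison.OrdinalIgnoreCase);
        return true; // no rule defined for this resource: defer to other checks
    }
}

A real deployment would of course pull roles and policies from the IAM suite rather than hard-coding them, but the shape of the decision is the same.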

Today’s corporations are challenged by a complex, mobile and open society; problems don’t necessarily get introduced through the front door. Therefore, it’s critical to have an intelligent IAM system like Courion’s to both prevent problems from being created and to maintain a watchful eye and take immediate action, such as automatic notifications or even automatically disabling access or accounts, should issues be discovered.

As an example, Courion’s solution can easily distinguish between a company’s finance department server and a marketing department’s color printer; the former is obviously a far more sensitive resource (unless you consider the price of replacement ink cartridges, and then it’s not so obvious). Consequently, Courion’s Intelligent IAM solution, based upon a number of criteria, can determine who should and shouldn’t have access to such sensitive resources. This scenario alludes to a fundamental concept that guides the Courion solution: the concept of risk as it pertains to the corporation. The system defines risk as a combination of likelihood, as in “OK, so what are the odds that will happen?”, and impact, as in, “So if it happens, how bad can it really be?” In general, a customer can configure the system to behave in accordance with their risk tolerance, which boils down to a basic question, “Just how lucky do you really feel?”
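As a toy illustration of that likelihood-times-impact idea (the three-level scale and the tolerance threshold are invented here for illustration, not Courion's actual scoring model):

// Invented example: score risk as likelihood x impact and compare it
// against a configured tolerance. Not Courion's actual model.
enum Level { Low = 1, Medium = 2, High = 3 }

static class RiskScore
{
    public static int Score(Level likelihood, Level impact)
    {
        return (int)likelihood * (int)impact;   // yields 1..9
    }

    // Returns true when the request should be flagged, reviewed or blocked.
    public static bool ExceedsTolerance(Level likelihood, Level impact, int tolerance)
    {
        return Score(likelihood, impact) > tolerance;
    }
}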

But it’s not just a pattern matching exercise based upon a bunch of If / Then conditions.  Courion’s Intelligent IAM solution not only knows which resources are more sensitive than others, but it also automatically adjusts its knowledge and its perspective over time.

As an analogy, a key isn’t necessarily an inherently sensitive resource. The risk associated with giving someone that key depends upon a variety of dynamic variables, such as who is going to get the key, what other keys may be behind the door that this key unlocks, how many other people also have a copy of this key, and exactly who are they?

So, while it may have seemed like a good idea to give Fred a key to the supply room, a week later we now know that all of Fred’s buddies also have a key to the supply room. More specifically, we know that Fred’s good friend Barney just got access to an additional key that unlocks the back door of the supply room. Consequently, the risk that the company’s expensive monogrammed tissue paper goes missing from the supply room has increased dramatically.

It’s this broad contextual view across a dynamically evolving environment, coupled with the knowledge of what is and isn’t an acceptable level of risk, and the ability to adapt its perspective to changing conditions that makes Courion’s Intelligent IAM solution such a valuable tool for ensuring appropriate access to corporate resources, such as prized paper goods.

However, perhaps one of the more subtle benefits provided by Courion’s Intelligent IAM solution is that it takes the burden off of the IT folks who no longer have to justify to angry users why their request was denied.  It now becomes a much easier conversation:

“I’m sorry. I like you, and I feel your pain. I want to give you access to the Executive rest room, but I just don’t have that kind of power. You see, we use Courion’s Intelligent IAM solution and it can distinguish between what you want and what you need. So, it knows that you want access to the executive rest room, but it also knows that you don’t really need access to the executive rest room. It’s not like the old days when I might be persuaded to give you what you want. Even if I could give you such access, the Courion solution is always watching and it’s configured to notify the entire executive team of rule violations, and not only that, it will automatically take away your access.  It will simply lock the door. Therefore, continuing to try to open the door might be embarrassing, even for you. Why don’t you just use that nice restroom down the hall like the rest of us and then go back to your desk and listen to some music; I suggest a tune from The Rolling Stones – “You can't always get what you want, but if you try sometimes, you just might find, you get what you need.”

blog.courion.com

Kuppinger ColeExecutive View: Ergon Airlock/Medusa - 71047 [Technorati links]

July 14, 2014 07:05 AM
In KuppingerCole

Ergon Informatik AG is a company based in Zurich. Alongside a large business unit for custom software development, Ergon has for many years also been present on the market as a vendor of standard software and has a significant number of customers. The company's core products are the closely interlinked solutions Airlock and Medusa. Airlock is a web application firewall that provides web single...
more

Kuppinger ColeExecutive View: Centrify Server Suite - 70886 [Technorati links]

July 14, 2014 06:28 AM
In KuppingerCole

Centrify is a US based Identity Management software vendor that was founded in 2004. Centrify has achieved recognition for its identity management and auditing solutions including single sign-on service for multiple devices and for cloud-based applications. The company is VC funded and has raised significant funding from a number of leading investment companies. The company as of today has more than 5,000 customers. Centrify has licensed key SaaS...
more

July 12, 2014

Anil JohnIdentity Validation as a Public Sector Digital Service? [Technorati links]

July 12, 2014 03:00 PM

I’ve written before about the role that the public sector currently has in identity establishment, but not in identity validation. This absence has led to an online ecosystem in the U.S. that depends on non-authoritative information for identity validation. These are some initial thoughts on what an attribute validation service, which provides validation of identity attributes using authoritative public sector sources, could look like.

Click here to continue reading. Or, better yet, subscribe via email and get my full posts and other exclusive content delivered to your inbox. It’s fast, free, and more convenient.


These are solely my opinions and do not represent the thoughts, intentions, plans or strategies of any third party, including my employer.

Julian BondJohn Doran (TheQuietus editor) playing DJ in an open air car park just off Old Street. Wed evening, ... [Technorati links]

July 12, 2014 10:23 AM
John Doran (TheQuietus editor) playing DJ in an open air car park just off Old Street. Wed evening, 16 July. I think it's free but not sure.

http://thequietus.com/articles/15674-red-market-subject-wednesdays-dj-programme 

https://www.facebook.com/events/298139427021855/
http://www.subjectyourself.co.uk/
http://www.redgallerylondon.com/
 The Quietus | News | Red Market: S U B J E C T Wednesday »
The Quietus' John Doran and more announced to play open-air venue

[from: Google+ Posts]

Julian BondUtopia returns on UK Channel 4  on Monday and Tuesday (14/15 July) next week. Don't miss it. [Technorati links]

July 12, 2014 10:15 AM
Utopia returns on UK Channel 4  on Monday and Tuesday (14/15 July) next week. Don't miss it.

http://www.channel4.com/programmes/utopia
 Utopia - Channel 4 »
When a group of strangers find themselves in possession of the manuscript for a legendary graphic novel, their lives brutally implode as they are pursued by a shadowy and murderous organisation...

[from: Google+ Posts]
July 11, 2014

Nat SakimuraThoughts on the Benesse personal data leak [Technorati links]

July 11, 2014 10:40 AM

Benesse has reportedly leaked up to 20.7 million records of personal data on children and others [1].

This is an instructive incident in many respects. Let me pick out a few of them.

Many of these points are made in Kusunoki-san's blog post "The missing piece of the list-broker business that Benesse filled in" [2], so I recommend reading that. (Incidentally, I raised the problem of data being resold from hand to hand five years ago in this article [3], and it is gratifying that the issue is finally entering public awareness. In the present case, several businesses were in fact selling this data, and at least two of them host their servers at the same IP address and appear to be effectively the same entity. This can be seen as one piece of evidence that the "personal data laundering" I pointed out five years ago is actually taking place.)

Below are brief thoughts on some of these points.

This comes in the middle of the revision of the Personal Information Protection Act

Well, the timing could hardly have been more exquisite. The policy outline published on June 24 [4] listed the problem of so-called list brokers as an item for continued study, and I expect its priority will now rise. The problem also stems from the "hole" opened by Article 23, Paragraph 2 of the Act (the same laundering problem I pointed out five years ago). Resistance from industry had been expected on this point, but now that an actual incident has occurred, I suspect there will be no choice but to close the hole, just as theory says.

On the handling of children's data

This incident has also forced the issue of children's data onto the agenda. As some experts have long pointed out, children's data demands different handling from adults' data, and there are several reasons why.

The first is the direct risk arising from use of the data. Children are vulnerable in many respects; they cannot yet judge whether someone should be trusted. If children's information circulates freely, it may well be used to deceive them. That alone calls for more careful handling than adults' data (even if not quite as careful as, say, the address of a domestic violence victim).

In that sense, I think it would be reasonable to treat children's data in a manner similar to sensitive data and prohibit its collection for any purpose other than directly providing a service.

The other point concerns the validity of consent.

In principle, both collection and use of data should be based on the consent of the data subject. Children, however, lack the capacity to give that consent. As a practical matter a parent or guardian consents on their behalf, but by the time the child becomes an adult, that consent may well no longer reflect their wishes. We therefore need to remember that "parental consent" is only a temporary consent, valid until the child acquires the capacity to consent. It follows naturally that consent given by a proxy should always carry an expiry date, at which point the consent must either be obtained again or the data deleted. Deleting the data when the person stops using the service may also be necessary.

These issues will need thorough discussion and consensus-building going forward.

Actual harm to individuals has been confirmed, and so has onward resale of the data

In this case, people whose data leaked have apparently already received sales calls and the like from parties who obtained it. Benesse seems to want to recover the data in order to limit the damage, but once digital data is out, recovery is virtually impossible. So how can the harm to individuals be contained?

One idea is to regulate the following two kinds of "acts", in this case:

(1) acquiring data from a third party without the consent of the data subject or their proxy, and

(2) using data without the consent of the data subject or their proxy.

If (1) can be enforced, onward resale is largely stopped at that stage. Moreover, even if a list has already been acquired, (2) makes it impossible to use it to contact the individuals concerned, which improves their privacy.

Data will leak; it is only a matter of time. The real problem is to ensure that, even then, the damage is kept to a minimum.

Many data leaks stem from sloppy access management. Since this one is described as data "taken out by an insider who was not a group employee," I suspect something similar was at play. That obviously has to be tightened up, and identity management is of course the foundation for doing so. Yet identity and access management is severely neglected at many companies. Every company should take this as a cautionary tale and use the opportunity to re-examine its own practices: "We don't have any shared accounts, right?", "The default system accounts have been disabled, right?", and so on. Security cannot be bolted on afterwards; it has to be built in from the design stage. Secure by Design (SBD) matters. [5] (And so does Privacy by Design (PbD), he adds, plugging it as only the second Japanese PbD Ambassador.)

But however much you do, you are only reducing the probability, not bringing it to zero. In other words, the question is not whether data will leak, but when and how much.

When designing the legal framework, therefore, we must assume that data will leak and design it so that the damage is minimized when it does. Ultimately, the only way to do that is to regulate how the data may be used.

As it happens, we are right in the middle of revising the Personal Information Protection Act. My personal hope is that it will be reorganized into a framework centered on regulating acts across the data lifecycle, with regulation of how data is used at the core, alongside how it is acquired and how it is stored.

 


[1] NHK, "Benesse: up to 20.7 million personal data records may have leaked" (2014/7/9), http://www3.nhk.or.jp/news/html/20140709/k10015871611000.html

[2] Kusunoki, "The missing piece of the list-broker business that Benesse filled in" (2014/7/9), Zasshu Rosen de Ikou, http://d.hatena.ne.jp/mkusunok/20140709/leak#seemore

[3] Sakimura, "Is the Personal Information Protection Act really that leaky?" (2009/11/29), .Nat Zone, http://www.sakimura.org/2009/11/656/

[4] Prime Minister's Office, "Policy Outline for Institutional Reform concerning the Utilization of Personal Data" (2014/6/24), http://www.kantei.go.jp/jp/singi/it2/kettei/pdf/20140624/siryou5.pdf

(The following was added on 2014/7/17.)


[5] In this particular case, it turns out that access control and per-user authentication were in fact in place. Because accounts were individual, the perpetrator was identified quickly (arrested on 7/17); access was possible only from a specific room (location authentication) and from specific PCs (device authentication); data could not be extracted over the network; and according to the WSJ report [6], USB ports had been disabled as well. The security failings seem limited to overly broad access rights to the data and an apparent lack of log monitoring. In IAM terms, this looks like a failure of policy configuration, plus a failure of operational policy and of operations themselves. Ironically, these may be the result of over-confidence born of unusually solid physical safeguards. You could call it the limit of traditional perimeter security, and it is exactly why people say security must be reorganized around identity. (This paragraph was supplemented on 7/23.)

[6] Jiji Press, "Involved in developing the customer DB: contract SE abused his knowledge to disable safeguards; arrest warrant to be sought (Metropolitan Police)" (2014/7/17 05:30 JST), Wall Street Journal (Japanese web edition), http://jp.wsj.com/news/articles/JJ10231533482860533506220261489651978215431?tesla=y&tesla=y

Mark Dixon - Oracle“Wink” at The Home Depot: Emerging #IoT Ecosystem? [Technorati links]

July 11, 2014 01:10 AM

Today, I learned from a USA Today article that The Home Depot and Amazon.com have begun to offer home automation devices that work with the Wink app and the Wink Hub home automation hub.

Boosting your home’s IQ got easier Monday as The Home Depot began selling a collection of nearly 60 gadgets that can be controlled by mobile devices, including light bulbs, lawn sprinklers and water heaters.

I quickly found that homedepot.com offers more Wink devices online than does Amazon.com - interesting that the orange bastion of brick and mortar DIY sales seems to be besting Amazon at its own game!

I jumped in my pickup and drove to the nearest Home Depot store - and there it was – a Wink end cap, stationed right between the aisles offering water heaters and replacement toilets. The display wasn’t pretty, but it was there.  I could have loaded up a cart full of water sprinkler controllers, video cameras, door locks, smoke alarms, LED lights, motion sensors and more – all controllable via Wink. Pretty impressive, actually.

(photo: the Wink end cap display at The Home Depot)

Two things are significant here:

  1. The Wink ecosystem for connecting many devices from multiple vendors seems to be emerging more quickly than systems promised by Apple and Google.
  2. The Home Depot is the epitome of American mainstream – making it available to the common folks, not just techno-geeks.  Heck, I was in the Home Depot store three times last Saturday alone to pick up stuff. That’s mainstream.

It is going to be really interesting to see how this stuff becomes part of “The Fabric of our Lives.”

Mark Dixon - OracleThe Zen of #IoT: The Fabric of our Lives [Technorati links]

July 11, 2014 12:10 AM


When I was a young engineering student at Brigham Young University, I had a physics professor who loved to promote what he called the “Zen of Physics.”  As I recall, he proposed that if we studied the right way and meditated the right way on the virtues of physics, we would reach a state of enlightenment about his beloved area of scientific thought.

As an engineering student more interested in practical application than theoretical science, I never did reach the level of enlightenment he hoped for, although I do remember some exciting concepts related to black holes and liquid nitrogen.

This last week, as I was pondering the merits of the Internet of Things, I had a Zen-like moment, an epiphany or moment of enlightenment of sorts, as I was mowing the lawn, of all things.

My thought at that moment?  The real value of the Internet of Things will become apparent when we find that this technology becomes woven seamlessly and invisibly into “The Fabric of our Lives.”

“The Fabric of our Lives” is actually a trademark of the Cotton Industry, so I can’t claim originality, but I think the concept is interesting.  When we come to realize that technology fits us as naturally and comfortably as a favorite old cotton shirt, we tend to forget about the technology itself, but enjoy the benefits of what has slowly become an integral part of ordinary living – woven into the fabric of everyday life.

When I had my little epiphany last Saturday, I had forgotten my post from April 1, 2013, entitled, “IoT – Emerging and Receding Invisibly into the Fabric of Life.”  What my Zen moment added is the idea that real value to us as humans is realized not when the first flashy headlines appear, but when the technology recedes quietly into the everyday fabric of our lives.

When I think of technology that has emerged since my childhood and then proceeded to become commonplace, I am amazed: microwave ovens, digital cameras, color television, satellite communications, cable/satellite TV, personal computers, the Internet, social media, smart phones and much more.  Each one of these progressed from being novelties or the stuff of techno-geeks to becoming mainstream threads in the everyday fabric of life.

So it will be with IoT. We talk a lot about it now.  We techno-geeks revel in the audacious beauty of it all.  Just about every publication in the world has something to say about it.  But as first a handful, and then many, of the devices and concepts become commonly accepted, they too will become invisible, but highly valuable threads woven ubiquitously into “The Fabric of our Lives.”

July 10, 2014

Kuppinger ColeIs the latest attack on energy companies the next Stuxnet? [Technorati links]

July 10, 2014 05:23 PM
In Alexei Balaganski

It really didn’t take long after my last blog post on SCADA security for an exciting new development to appear in the press. Several security vendors, including Symantec and F-Secure, have revealed new information about a hacker group called “Dragonfly” (also known as “Energetic bear”) that has launched a massive cyber-espionage campaign against US and European companies, mainly in the energy sector. Allegedly, the most recent development indicates that the hackers not only managed to compromise those companies for espionage, but also possess the necessary capabilities for sabotage, disruption and damage to the energy grids of several countries.

Previous reports show that the group known as “Energetic bear” has been operating since at least 2012, having highly qualified specialists based somewhere in Eastern Europe. Some experts go as far as to claim that the group has direct ties with Moscow, operating under control of the Russian secret services. So, it’s quite natural that many publications have already labeled Dragonfly as the next Stuxnet.

Now, as much as I love bold statements like this, I personally still find it difficult to believe. I admit that I have not seen all the evidence yet, so let’s summarize what we do know already:

However, the most recent development that has brought Dragonfly into the limelight is that the group has begun distributing the malware using the “watering hole” approach. Several ICS software vendor websites have been compromised, and their software installers available for download have been infected with Havex. It’s been reported that in one case, compromised software has been downloaded at least 250 times.

Since the sites belonged to notable vendors of programmable logic controllers used in managing wind turbines and other critical equipment, there could not be any other conclusion than “Russia is attacking our energy infrastructure”, right? Or could it?

Quite frankly, I fail to see any resemblance between Stuxnet and Dragonfly at all.

Stuxnet has been a highly targeted attack created specifically for one purpose: destroy Iranian nuclear enrichment industry. It contained modules developed specifically for a particular type of SCADA hardware. It has been so complex in its structure that experts are still not done analyzing it.

Dragonfly, on the other hand, is based on existing and widely used malware tools. It’s been targeting a wide array of different organizations – current reports show that it’s managed to compromise over 1000 companies. Also, the researchers who discovered the operation could not find any traces of PLC-controlling payloads; the only purpose of the tool appears to be intelligence gathering. The claims of ties to the Russian secret services seem to be completely unsubstantiated as well.

So, does this all mean that there is no threat to our energy infrastructures after all? Of course it does not! If anything, the whole Dragonfly story has again demonstrated the abysmal state of information security in Industrial Control Systems around the world. Keep in mind, this time the cause of the attack wasn’t even weak security of an energy infrastructure. Protecting your website from hacking belongs to the basic “security hygiene” norms and does not require any specialized software; a traditional antivirus and firewall would do just fine. Unfortunately, even SCADA software vendors seem to share the relaxed approach towards security typical for the industry.

The fact that the Dragonfly case has been publicized so much is actually good news, even if not all publications are up to a good journalism standard. If this publicity leads to tighter regulations for ICS vendors and increases awareness of the risks among ICS end users, we all win in the end. Well, maybe except the hackers.

Vittorio Bertocci - MicrosoftWhat’s New in ADAL v2 RC [Technorati links]

July 10, 2014 05:12 PM

ADAL v2 RC is here, and is packed with new features! These are the last planned changes we are making to the library surface for v2, hence you should expect this to be the harbinger of what you’ll get at GA time.

Here there’s a list of the main changes. For some of the new features, the changes are so significant that I wrote an entire post just for them – watch for the links in the descriptions.

The list is pretty long! We are in the process of updating the samples on GitHub: hopefully that will help you to follow the changes. As mentioned above, we are not planning any additional changes before GA – hence you can expect the current surface to be the one that will end up in the documentation.

If you have issues migrating beta code to RC, feel free to drop me a line.

Thanks and enjoy!

Vittorio Bertocci - MicrosoftADAL v2 and Windows Integrated Authentication [Technorati links]

July 10, 2014 04:41 PM

The release candidate of ADAL v2 introduces a new, more straightforward way of leveraging Windows Integrated Authentication (WIA) for your AAD federated tenants in your Windows Store and .NET applications.

Its use is very simple. You might have read here that we model direct username/password authentication by holding those credentials in one instance of a new class, UserCredential. To take advantage of WIA you use the exact same principle: you create one empty instance of UserCredential and pass it through; that will be your way of letting ADAL know that you want to use WIA. In practice:

AuthenticationResult result = 
     authContext.AcquireToken(todoListResourceId, clientId, new UserCredential());

If you are operating from a domain-joined machine, are connected to the domain network, are signed in as a domain user, and the tenant used in authContext is a federated tenant, you’ll get back a token right away!
Once you successfully get a token that way, ADAL will create a cache entry for it as usual; from that moment on, it won’t matter how the token was acquired – it will be handled as usual and available for lookup from any other AcquireToken overload.
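For reference, here is roughly how the whole flow might look; the authority, resource and client ID values below are placeholders, and the silent acquisition at the end is shown only as one typical way of picking the token back up from the cache (assuming the RC keeps the AcquireTokenSilent overloads of the .NET surface).

// Sketch only: placeholder authority, resource and client values.
string authority = "https://login.windows.net/contoso.onmicrosoft.com"; // your federated tenant
string todoListResourceId = "https://contoso.onmicrosoft.com/TodoListService"; // target API
string clientId = "11111111-2222-3333-4444-555555555555"; // your native client's ID

var authContext = new AuthenticationContext(authority);

// An empty UserCredential tells ADAL to use Windows Integrated Authentication.
AuthenticationResult result =
    authContext.AcquireToken(todoListResourceId, clientId, new UserCredential());

// Later calls can be served from ADAL's cache without showing any UI.
AuthenticationResult cached =
    authContext.AcquireTokenSilent(todoListResourceId, clientId);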

Things to note:

Although it was already possible to take advantage of WIA with the older OM (by using PromptBehavior.Never against a federated tenant), we are confident that this simpler and more predictable mechanism is more in line with the convenience it is legitimate to expect when operating within a domain. Enjoy!

Vittorio Bertocci - MicrosoftThe New Token Cache in ADAL v2 [Technorati links]

July 10, 2014 05:21 AM


The release candidate of ADAL v2 introduces a new cache model, which makes it possible to cache tokens in middle tier apps and dramatically simplifies creating custom caches.
In this post I’ll give you a brief introduction to the new model. You can see the new cache in action in our updated samples.

Limitations of the ADAL v1 Model

The token cache in ADAL plays a key role in keeping the programming model approachable. The fact that ADAL saves tokens (of all kinds: access, refresh, id) and token metadata (requesting client, target resource, user who obtained the token, tenant, whether the refresh token is an MRRT…) allows you to simply keep calling AcquireToken, knowing that behind the scenes ADAL will make the absolute best use of the cached information to give you the token you need while minimizing prompts.

Another role fulfilled by the ADAL cache is to offer you a view on the current authentication status of your application: by interrogating the cache you can discover whether you have access to a given resource, if you have tokens for a specific set of user accounts, and so on.

In ADAL v1, the cache implementation reflected the primary target scenario of that version of the library: native clients. The cache was implemented as an IDictionary, with a specific key type that reflected the domain-specific info necessary for handling tokens. That fulfilled the two functions outlined earlier, keeping track of the data we needed and offering an easy way of querying the collection (via LINQ). That did its job for native clients, but it was unsuitable for use in mid-tier apps. Think of a web app with a few million users, each of them with a few tokens stored for calling APIs in the context of their sessions – the resulting dictionary would have been pretty hard to scale. For that reason, the AcquireToken* implementations in ADAL v1 meant to run primarily on the server side did not make use of ADAL’s cache.

Another shortcoming of the v1 model was that providing a custom cache required you to implement IDictionary – not rocket science, but certainly an onerous task. Furthermore, many of the elements in the key type were really meant for your own queries and were never used by AcquireToken. We were aware of the complications, but given the asymmetry between producers of custom cache implementations and consumers of such classes (the latter vastly outnumbering the former) we made the tradeoff. When this was picked up in v2, though, our awesome dev team found a way of avoiding the tradeoff altogether – designing a new model that delivers on both functions AND is a breeze to implement!

The New Cache Model

The idea of the new cache model is pretty simple: ADAL manages the cache structure as a private implementation detail, but gives you the means to 1) provide a persistence layer for it, so that you can use your favorite store to hold it, and 2) obtain views of the cache content, so that you can gain insights into the capabilities of your application without being exposed to the internals of how the various cache entries are actually maintained. It sounds pretty awesome, right? Smile

You create a custom cache by deriving from the new TokenCache class and passing an instance of that class at AuthenticationContext construction time. Here’s how it looks in VS’ Class View:

[screenshot: the TokenCache class in Visual Studio’s Class View]

There’s quite a lot of stuff there, but in fact you only need to touch 3-4 things.
Here’s how it works.

TokenCache features three notifications – BeforeAccess, BeforeWrite and AfterAccess – that are activated whenever ADAL does work against the cache. Those notifications give you the opportunity to keep the storage of your choice in sync with the in-memory cache that ADAL uses.

Say that you start your app for the very first time, and you make a call to AcquireToken(resource1, clientId, redirectUri). From the cache’s point of view, how does that unfold?

  1. ADAL needs to check the cache to see if there is already an access token for resource1 obtained by clientId, or a refresh token good for obtaining such an access token, plus whatever other private heuristics you don’t need to worry about. Right before it reads the cache, ADAL calls the BeforeAccess notification. Here you have the opportunity of retrieving your persisted cache blob from wherever you chose to save it and feeding it to ADAL. You do so by passing that blob to Deserialize.
    Note that you can apply all kinds of heuristics to decide whether the existing in-memory copy is still OK, to reduce how often you access your persistent store.
  2. As we said, this is the first time the application runs, hence the cache will (typically) be empty. ADAL therefore pops up the authentication UX and guides the user through the authentication experience. Once it obtains a new token, it needs to save it in the cache: but right before doing so, it invokes the BeforeWrite notification. That gives you the opportunity of applying whatever concurrency strategy you want to enact: for example, on a mid tier app you might decide to place a lock on your blob, so that other nodes in your farm possibly attempting a write at the same time would avoid producing conflicting updates. If you are optimistic, of course, you can decide to simply do nothing here. Winking smile
  3. After ADAL has added the new token to its in-memory copy of the cache, it calls the AfterAccess notification. That notification is in fact raised every time ADAL accesses the cache, not just when a write took place: however, you can always tell whether the current operation resulted in a cache change, because in that case the property HasStateChanged will be set to true. If that is the case, you will typically call Serialize() to get a binary blob representing the latest cache content – and save it in your storage. After that, it is your responsibility to clear whatever lock you might have set.
    Very important: ADAL NEVER automatically resets HasStateChanged to false. You have to do it in your own code once you are satisfied that you handled the event correctly.

Those are the main moving parts you need to handle. Other important aspects to consider concern the lifecycle of the cache instance outside of its use from AcquireToken. For example, you’ll likely want to populate the cache from your store at construction time; you’ll want to override Clear and DeleteItem to ensure that you reflect cache state changes; and so on.

You might wonder why you can’t just wait for the first access and leave it to the notifications to do the initial population. That’s tricky. You could do that, but then if you need to query the cache before requesting the first token you’d be in trouble. Think of a multi-tenant client: on first access you’ll use common as the authority, but for subsequent accesses you want to use the authority corresponding to the user that actually signed in and initialized the app to its own tenant. If you don’t do that, you’ll never hit the cache during AcquireToken, given that using “common” is equivalent to saying “I don’t know what tenant to use”.

If you want to query the cache, you can call ReadItems() to pull out an IEnumerable of TokenCacheItems.
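For example, here’s a hedged sketch of such a query. It assumes the AuthenticationContext.TokenCache property and that TokenCacheItem exposes Resource, DisplayableId and ExpiresOn – check the RC surface for the exact names:

// List the users for whom we still hold an unexpired token for a given resource.
var liveItems = authContext.TokenCache.ReadItems()
    .Where(item => item.Resource == resourceId && item.ExpiresOn > DateTimeOffset.UtcNow);

foreach (TokenCacheItem item in liveItems)
{
    Console.WriteLine("Cached token for {0}, expiring {1}", item.DisplayableId, item.ExpiresOn);
}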

Pretty easy, right?

Here’s one of my favorite benefits of this model: from ADAL v2 RC on, AcquireTokenByAuthorizationCode does save tokens in the cache! This results in a tremendous reduction in the amount of code necessary in middle tier applications, as you’ll see in the updated samples.
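As a hedged sketch of what that middle tier code can now look like – the exact overload signature and the placeholder identifiers (authority, appKey, signedInUserObjectId, graphResourceId) are my assumptions, and the updated samples remain the authoritative reference:

// Per-user cache backed by the DB – see the EFADALTokenCache class in the Examples section below.
AuthenticationContext authContext =
    new AuthenticationContext(authority, new EFADALTokenCache(signedInUserObjectId));
ClientCredential clientCred = new ClientCredential(clientId, appKey);

// Redeeming the authorization code now writes the resulting tokens straight into the cache.
AuthenticationResult result = authContext.AcquireTokenByAuthorizationCode(
    authorizationCode, new Uri(redirectUri), clientCred, graphResourceId);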

Examples

Here’s a simple example. Say that I am writing a Windows desktop app, and I want to save tokens so that I don’t have to re-authenticate every single time I launch the application. I decide to save the tokens in a DPAPI-protected file. Here’s a super simple cache implementation doing exactly that:

using System.IO;
using System.Security.Cryptography;
using Microsoft.IdentityModel.Clients.ActiveDirectory;

// This is a simple persistent cache implementation for a desktop application.
// It uses DPAPI for storing tokens in a local file.
class FileCache : TokenCache
{
    public string CacheFilePath;
    private static readonly object FileLock = new object();

    // Initializes the cache against a local file.
    // If the file is already present, it loads its content in the ADAL cache
    public FileCache(string filePath=@".\TokenCache.dat")
    {
        CacheFilePath = filePath;
        this.AfterAccess = AfterAccessNotification;
        this.BeforeAccess = BeforeAccessNotification;
        lock (FileLock)
        {
            this.Deserialize(File.Exists(CacheFilePath) ? 
                ProtectedData.Unprotect(File.ReadAllBytes(CacheFilePath), 
                                        null, 
                                        DataProtectionScope.CurrentUser) 
                : null);
        }
    }

    // Empties the persistent store.
    public override void Clear()
    {
        base.Clear();
        File.Delete(CacheFilePath);
    }

    // Triggered right before ADAL needs to access the cache.
    // Reload the cache from the persistent store in case it changed since the last access.
     void BeforeAccessNotification(TokenCacheNotificationArgs args)
    {
        lock (FileLock)
        {
            this.Deserialize(File.Exists(CacheFilePath) ?  
                ProtectedData.Unprotect(File.ReadAllBytes(CacheFilePath),
                                        null,
                                        DataProtectionScope.CurrentUser) 
                : null);
        }
    }

    // Triggered right after ADAL accessed the cache.
    void AfterAccessNotification(TokenCacheNotificationArgs args)
    {
        // if the access operation resulted in a cache update
        if (this.HasStateChanged)
        {
            lock (FileLock)
            {                    
                // reflect changes in the persistent store
                File.WriteAllBytes(CacheFilePath, 
                    ProtectedData.Protect(this.Serialize(),
                                            null,
                                            DataProtectionScope.CurrentUser));
                // once the write operation took place, restore the HasStateChanged bit to false
                this.HasStateChanged = false;
            }                
        }
    }
}

That is really super-simple code, compared to having to implement an entire IDictionary.
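Wiring it up is just a matter of passing an instance at construction time; here’s a minimal usage sketch (the authority, resource ID, client ID and redirect URI are placeholders):

AuthenticationContext authContext = new AuthenticationContext(
    "https://login.windows.net/contoso.onmicrosoft.com", new FileCache());

// Tokens obtained here survive app restarts, courtesy of the DPAPI-protected file.
AuthenticationResult result =
    authContext.AcquireToken(resourceId, clientId, redirectUri, PromptBehavior.Auto);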

Want to see something a bit more challenging?

Say that I have a web application. My web app connects to web APIs on behalf of its users. Every user has a set of tokens that are saved in a SQL DB, so that when they sign in to the web app they can directly perform their web API calls without having to re-authenticate or repeat consent. The new ADAL cache model makes it pretty easy to achieve this: we can have a flat list of blobs, each representing an ADAL cache for a specific web user. When the user signs in, we retrieve the corresponding blob and use it to initialize his/her token cache. That’s exactly how we implemented the cache in the updated multitenant samples. Here’s the implementation:

using System;
using System.ComponentModel.DataAnnotations;
using System.Data.Entity;
using System.Linq;
using Microsoft.IdentityModel.Clients.ActiveDirectory;

// TodoListWebAppContext is the Entity Framework DbContext defined elsewhere in the sample.
public class PerWebUserCache
{
    [Key]
    public int EntryId { get; set; }
    public string webUserUniqueId { get; set; }
    public byte[] cacheBits { get; set; }
    public DateTime LastWrite { get; set; }
}

public class EFADALTokenCache: TokenCache
{
    private TodoListWebAppContext db = new TodoListWebAppContext();
    string User;
    PerWebUserCache Cache;
    
    // constructor
    public EFADALTokenCache(string user)
    {
       // associate the cache to the current user of the web app
        User = user;
        
        this.AfterAccess = AfterAccessNotification;
        this.BeforeAccess = BeforeAccessNotification;
        this.BeforeWrite = BeforeWriteNotification;

        // look up the entry in the DB
        Cache = db.PerUserCacheList.FirstOrDefault(c => c.webUserUniqueId == User);
        // place the entry in memory
        this.Deserialize((Cache == null) ? null : Cache.cacheBits);
    }

    // clean up the DB
    public override void Clear()
    {
        base.Clear();
        foreach (var cacheEntry in db.PerUserCacheList)
            db.PerUserCacheList.Remove(cacheEntry);
        db.SaveChanges();
    }

    // Notification raised before ADAL accesses the cache.
    // This is your chance to update the in-memory copy from the DB, if the in-memory version is stale
    void BeforeAccessNotification(TokenCacheNotificationArgs args)
    {
        if (Cache == null)
        {
            // first time access
            Cache = db.PerUserCacheList.FirstOrDefault(c => c.webUserUniqueId == User);
        }
        else
        {   // retrieve last write from the DB
            var status = from e in db.PerUserCacheList
                         where (e.webUserUniqueId == User)
                         select new
                         {
                             LastWrite = e.LastWrite
                         };
            // if the in-memory copy is older than the persistent copy
            if (status.First().LastWrite > Cache.LastWrite)
            // read from storage and update the in-memory copy
            {
                Cache = db.PerUserCacheList.FirstOrDefault(c => c.webUserUniqueId == User);
            }
        }
        this.Deserialize((Cache == null) ? null : Cache.cacheBits);
    }
    // Notification raised after ADAL accessed the cache.
    // If the HasStateChanged flag is set, ADAL changed the content of the cache
    void AfterAccessNotification(TokenCacheNotificationArgs args)
    {
        // if state changed
        if (this.HasStateChanged)
        {
            // reuse the existing entry (if any) so that updates don't create duplicate rows
            if (Cache == null)
            {
                Cache = new PerWebUserCache { webUserUniqueId = User };
            }
            Cache.cacheBits = this.Serialize();
            Cache.LastWrite = DateTime.Now;
            // update the DB and the last write timestamp
            db.Entry(Cache).State = Cache.EntryId == 0 ? EntityState.Added : EntityState.Modified;
            db.SaveChanges();
            this.HasStateChanged = false;
        }
    }
    void BeforeWriteNotification(TokenCacheNotificationArgs args)
    {
        // if you want to ensure that no concurrent writes take place, use this notification to place a lock on the entry
    }
}

Also in this case the implementation is pretty self-explanatory: I disregarded locks and only added a little timestamp check to avoid swapping potentially sizable blobs from the DB when it’s not necessary.
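For completeness, here’s a sketch of how the web app can key the cache to the signed-in user. The objectidentifier claim type below is the usual AAD one, but treat the exact claim lookup (and the authority variable) as assumptions rather than the sample’s literal code:

// requires using System.Security.Claims;
// Key the cache on the signed-in user's object id, so each user gets his/her own blob.
string userObjectId = ClaimsPrincipal.Current
    .FindFirst("http://schemas.microsoft.com/identity/claims/objectidentifier").Value;

AuthenticationContext authContext =
    new AuthenticationContext(authority, new EFADALTokenCache(userObjectId));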

Wrap

I have a confession to make: although I was always bummed by the lack of a viable token caching solution on the server side, and the consequent need for complex code for confidential clients, I didn’t believe that a cache redesign would be possible so late in the cycle given the time constraints we are up against (we want to release soon!!). However, the dev team was very passionate about solving that problem, and worked very hard to deliver a design that blew me away – it satisfied all requirements without affecting the schedule! So big kudos to the dev team, especially to Afshin Smile

All the new features discussed here apply to both ADAL .NET and ADAL for Windows Store. However, that does NOT change the defaults: the ADAL .NET OOB cache remains in-memory, and ADAL for Windows Store retains its default persistent cache. The news only applies to how you’d implement a custom cache, should you choose to write one. If you need a starting point, you can look at the above snippets. If you want to see them in action, you’ll find them (and possibly others) in our GitHub samples.

Enjoy!

July 08, 2014

MythicsWebCenter Portal:  Building Your Own. Part 1 [Technorati links]

July 08, 2014 09:04 PM

In this blog, I will share my recent successful experience and the…

GluuAuthentication Speed Versus Flexibility: Benchmarking SSO [Technorati links]

July 08, 2014 02:49 PM

Benchmark SSO

Gluu has been working quite a bit recently on benchmarking, and the question came up whether it’s better to use the Gluu Server’s built-in LDAP authentication with a custom filter, or the Jython-based “Custom Authentication Interception Script.”

If you are just considering throughput, the Jython script has more CPU overhead. However, it gives the organization vastly more flexibility. In the future, some organizations may support many authentication workflows. How to identify a person may vary depending on the location of the person being authenticated, and what device is in their hands. Authentication attempts provide valuable data for fraud detection, which may be exposed via API interfaces. For these cases, empowering system administrators to add business logic without having to compile, build, and deploy a war/jar file can improve security and add agility.

Another consideration for benchmarking was whether to use the Gluu Server for session management. The OpenID Connect specification does not require central session management – the session is only in the browser. In the Gluu Server, central session persistence is optional. In large deployments it’s undesirable; in smaller deployments it can be quite useful.

In the future, we may see complementary OpenID Connect specifications that add session management alternatives. One idea is for the OpenID Provider (“OP”) to return the logout URLs to the browser, which could then notify the back-end servers that a logout has occurred. The Gluu Server also has a “Custom Logout Interception Script” that enables the OP to insert some tactical code to ensure the cleanup of resources (for example, calling the API to make sure the CA Siteminder session is ended).

In the long term, session management needs to be centralized to enable SSO where there are many autonomous websites and mobile applications. Also, extending Web SSO to mobile applications is under discussion for standardization. This is critical for IoT. For example, when I log out of my tablet, can I force a logout of my TV?

As the OP becomes smarter, there is a trade-off of speed and flexibility, hardware and functionality. Depending on your business requirements, and the number of people you are serving, you may have to make a number of hard choices.

Learn more about benchmarking the open source OX OAuth2 platform for large scale deployments.

Julian BondThe Rhesus Chart (Laundry Files) by Charles Stross [Technorati links]

July 08, 2014 08:26 AM
Ace (2014), Hardcover, 368 pages
[from: Librarything]

Vittorio Bertocci - MicrosoftUsing ADAL .NET to Authenticate Users via Username/Password [Technorati links]

July 08, 2014 07:45 AM

This might be the most requested feature for ADAL: the ability to authenticate a user by passing in username and password directly, without showing any pop-up. There are perfectly legitimate scenarios that require this feature; unfortunately, there are also many ways in which abusing it might backfire.

With the RC we just released, we added this feature to ADAL .NET (but not to Windows Store or Windows Phone). We have a sample showing how to use it; here I’ll highlight the minimal syntax that lights it up, discuss some considerations about different cases, and above all spell out the limitations and warnings about what you are trading away when you choose to go this route.

When to Use This Feature

There are a number of scenarios where the direct use of username and password is inevitable. The ones below are the ones I encountered most often.

Note that all of those scenarios could potentially be addressed by Windows Integrated Auth (WIA); however, not all setups can leverage that (e.g. cloud-only tenants, clients running outside of the domain, etc.), hence below I assume we are in one of those unfortunate cases.

Headless clients

Say that you are operating from a console app running on Server Core. There is simply no window manager on the box to render any UX – everything needs to take place in text.

Legacy Solutions

During the Ask the Expert night at this year’s TechEd North America I met a gentleman who described a setup in which he was using legacy hardware that already sends username and password. His investment in those clients was massive, and decommissioning them (or the software running on them) was out of the question. He very much wanted to move to AAD and move his backend to the cloud, but could not rely on our current credential gathering experience. The direct use of username and password will allow him to bridge his legacy solution to his new cloud-based backend, secured via AAD.

Automated Testing

This is an all-time favorite of our partners within Microsoft. If I had a dollar for every mail/IM/hallway conversation I got about this… Smile
The scenario is simple: you have a solution based on native clients and you want to automate its verification. Existing test harnesses don’t always make it easy to automate a web-based credential gathering interface, hence the request for a mechanism to easily obtain tokens in exchange for test credentials.

When NOT to Use This Feature

This is easy: in pretty much any other case! Smile Direct manipulation of credentials is a BIG responsibility that significantly grows your attack surface, is conducive to bad habits (like caching the credentials), denies you pretty much all of the advantages you get from presenting a server-driven experience (multi-factor auth, consent, multi-hop federation, etc. – see below) and makes your client deployments brittle.

The main anti-pattern hidden behind requests for this feature is the desire to customize the authentication experience. I totally understand that desire, but I often get the impression that the tradeoffs one makes when going that route are not always well understood. Falling back to direct credential manipulation is an awfully high price to pay: it cuts you off from a long list of features and puts both your users and your app at risk. I would rather hear your feedback about what parts of the server-provided UX you want to customize – and fiercely fight for you in shiproom to make that change happen – than help you through a security crisis.

How it works

Enough with the doom & gloom, let’s take a look at some code! Winking smile
For the visitors from the future: this feature lights up for the first time in ADAL version 2.7.10707.1513-rc.

I daresay that the way in which this feature has been implemented fits right into the existing, well-proven credentials model we introduced in v1 for handling the client credentials flow (see this sample).
We introduced a new type, UserCredential, which represents a user credential. If you want to use username and password, you initialize a new instance via the following:

UserCredential uc = new UserCredential(user, password);

Where user is a string containing the UPN of the user you want to authenticate, and password is a string or a SecureString containing… well, you know.
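If you’d rather not keep the password in a plain string, here is a minimal sketch of collecting it into a SecureString from the console. It assumes the SecureString-accepting constructor mentioned above; the UPN is a placeholder:

// requires using System; and using System.Security;
Console.Write("Password: ");
SecureString securePassword = new SecureString();
ConsoleKeyInfo key;
// Read keystrokes without echoing them; a real app would also handle backspace, etc.
while ((key = Console.ReadKey(true)).Key != ConsoleKey.Enter)
{
    securePassword.AppendChar(key.KeyChar);
}
Console.WriteLine();

UserCredential uc = new UserCredential("user@contoso.onmicrosoft.com", securePassword);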

How do you use uc to get a token? Well, we added a couple of overloads to the AcquireToken* family:

public AuthenticationResult AcquireToken(string resource, string clientId, UserCredential userCredential);
public Task<AuthenticationResult> AcquireTokenAsync(string resource, string clientId, UserCredential userCredential);

 

The relationship between the client app and the resource is precisely the same one you learned about in all the other single-tenant native client -> web service ADAL samples: both need to be registered, the API needs to expose at least one permission, the client needs to be configured to request that permission, and so on. Note that here there is no opportunity for AAD to prompt for consent, hence flows which would require it are off limits.

Once you call one of those overloads, as long as you provided the correct credentials (and your tenant is configured correctly) you’ll get back a standard AuthenticationResult, the resulting tokens will be automatically cached, and so on.
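Putting the pieces together, a minimal headless run might look like the following sketch (the authority, client ID and resource URI are placeholders; the call must live in an async method to use await):

AuthenticationContext authContext =
    new AuthenticationContext("https://login.windows.net/contoso.onmicrosoft.com");

AuthenticationResult result = await authContext.AcquireTokenAsync(
    "https://contoso.onmicrosoft.com/TodoListService", clientId, uc);

// The token is cached automatically; subsequent AcquireToken* calls can be served silently.
Console.WriteLine("Got a token for {0}, valid until {1}",
    result.UserInfo.DisplayableId, result.ExpiresOn);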

You can get a feel for how this all works by giving a spin to our headless native client sample on GitHub. Here’s a screenshot of a typical run, to give you a sense of the experience you can achieve with this flow. Party like it’s ‘95! Smile

[screenshot: a typical run of the headless client sample]

To peek a bit behind the scenes, there are two main sub-scenarios: managed tenants, where ADAL sends the user’s credentials directly to Azure AD; and federated tenants, where ADAL first obtains an assertion from the on-premises STS (such as ADFS) via WS-Trust, and then trades it with Azure AD for the requested tokens.

From your code you won’t notice any difference between the two cases – I am just mentioning this so that you’re aware of what’s required for making this flow work. For example, if instead of ADFS you set up another identity provider that does not expose WS-Trust endpoints, or exposes them differently from ADFS, this flow will likely fail.

Constraints & Limitations

Here’s a list of limitations you’ll have to take into account when using this flow.

Only on .NET

Given the intended usage of this feature, we decided to add it only to .NET.

On Windows Store we added the ability to use Windows Integrated Auth instead, which has many of the same advantages and fewer drawbacks. Details in another post.

No web sites/confidential clients

This is not an ADAL limitation, but an AAD setting. You can only use this flow from a native client. A confidential client, such as a web site, cannot use direct user credentials.

No MSA

Microsoft accounts that are used in the context of an AAD tenant (classic example: Azure admins) cannot authenticate to AAD via raw credentials – they MUST use the interactive flow (though the PromptBehavior.Never flag remains an option).

No MFA

Multi-factor authentication requires dynamic UX to be served on the fly – that clearly cannot happen in this flow.

No Consent

Users do not have any opportunity to provide consent if username & password are passed directly.

No multi-hop federation

Any scenario requiring home realm discovery, multiple federation hops and similar won’t work – the protocol steps are rigidly codified in the client library, with no chance for the server to dynamically influence the authentication path.

No other server-side features, really

In the “traditional” AcquireToken flows you have the opportunity of injecting extra parameters that will influence the behavior of AAD – including parameters that AAD didn’t even support when the library was released. None of that is an option when using username and password directly.

Next

Direct use of username and password is a powerful feature which enables important scenarios. However, it is also a bit of a Faustian pact – the price you pay for its directness lies in the many limitations it entails and the reduced flexibility it imposes on any solution relying on it.
If you are in doubt about whether this feature is right for your scenario, feel free to drop us a line!

Vittorio Bertocci - MicrosoftADAL for .NET/Windows Store/Windows Phone Is Now Open Source! [Technorati links]

July 08, 2014 06:06 AM

We’ve been saying it was coming for almost a year. With this RC preview release, it’s finally happening: ADAL for .NET/Windows Store/Windows Phone is now fully open source!

Without getting too dramatic, this truly ushers in a new era of transparency and collaboration between our team and you guys – you’ll be able to browse and build the source, pull work-in-progress builds from our MyGet feed, load the library symbols while debugging, and send pull requests for the fixes and features you care about.

Note that we will keep releasing new NuGet versions (stable and prerelease) at the usual location, with the usual support policies – the code is an additional way for you to get even more value from ADAL and does not replace our usual release cycle.

As you might infer from the number of exclamation points, I am pretty excited about this! Smile
All of the above points are pretty self-explanatory, but the MyGet and VS configuration for vacuuming down the library symbols require a bit more guidance: see the instructions below.

Configuring Visual Studio 2013 to Access AAD’s MyGet Feed

Our collaboration with the ASP.NET team on the OWIN middleware components for OpenId Connect made us experience first hand how convenient it is to have a MyGet feed where we can dump nightly builds and use it as a collaboration touch point as we refine our software. Hence, we decided to extend those benefits to ADAL itself.
To configure the AAD MyGet feed in VS 2013:

…and voila’! From now on you can get the absolute freshest (and totally unsupported, BTW Smile) work-in-progress ADAL build.

Remember, for official previews and stable releases keep referring to the NuGet.org feed.

Configuring Visual Studio to Load ADAL Symbols

In my opinion, this is one of the coolest VS + NuGet features for open source projects. How many times have you wished you could unpack that mysterious error and get to the bottom of what exactly is failing in that oh-so-handy-but-oh-so-black-box library you’re using? Well, now with ADAL you can! It just requires a bit of configuration.

You basically need to follow the “recommended configuration” section of this page.
Here’s how my debugger options look after I have done so:

[screenshot: Visual Studio debugger options]

and here are my symbol settings:

[screenshot: Visual Studio symbol settings]

Please be aware that loading up the symbol cache is going to be quite laborious for VS, hence don’t set this thing up right before walking on stage for a demo! Smile But once you’ve got the symbols in place, you’ll be able to dig as deep as you want.

Next

We’ve been waiting for this for a long time. Today we are all very excited to start to develop ADAL for .NET/Windows Store/Windows Phone in that huuuuge open space floor that is the Internet.

Now that we have finally reached the RC milestone, you can expect the next few weeks to be devoted to stabilization – however, don’t let that stop you from toying with the source and sharing your ideas, and if there’s something you want to fix or contribute… hit that pull request button! Smile