October 30, 2014

Matt Flynn - NetVision: A Few Thoughts on Privacy in the Age of Social Media [Technorati links]

October 30, 2014 02:02 PM
Everyone already knows there are privacy issues related to social media and new technologies. Non-tech-oriented friends and family members often ask me whether they should avoid Facebook Messenger or flashlight apps, or whether it's OK to use credit cards online in spite of recent breach headlines. The mainstream media writes articles about leaked personal photos and the Snappening. So, it's out there. We all know. We know there are bad people out there who will attempt to hack their way into our personal data. But, that's only a small part of the story.

For those who haven't quite realized it, there's no such thing as a free service. Businesses exist to generate returns on investment capital. Some have said about Social Media, "if you can't tell what the product is, it's probably you." To be fair, most of us are aware that Facebook and Twitter will monetize via advertising of some kind. And yes, it may be personalized based on what we like or retweet. But, I'm not sure we fully understand the extent to which this personal, potentially sensitive, information is being productized.

Here are a few examples of what I mean:

Advanced Profiling

I recently viewed a product marketing video targeted at communications service providers. It describes the massive adoption of mobile devices and broadband connections, suggesting that by next year there will be 7.7 billion mobile phones in use with 15 billion connections globally. And that "All of these systems produce an amazing amount of customer data," to the tune of 40TB per day, only 3% of which is transformed into revenue. The rest isn't monetized. (Gasp!) The pitch is that by better profiling customers, telcos can improve their ability to monetize that data. The thing that struck me was the extent of the profiling.

As seen in the screen capture, the user profile presented extends beyond the telco services acquired or service usage patterns into the detailed information that flows through the system. The telco builds a very personal profile using information such as favorite sports teams, life events, contacts, location, favorite apps, etc. And we should assume that favorite sports team could easily be religious beliefs, political affiliations, or sexual interests.

IBM and Twitter

On October 29, IBM and Twitter announced a new relationship that enables enterprises to "incorporate Twitter data into their decision-making." In the announcement, Twitter describes itself as "an enormous public archive of human thought that captures the ideas, opinions and debates taking place around the world on almost any topic at any moment in time." And now all of those thoughts, ideas, and opinions are available for purchase through a partnership with IBM.

I'm not knocking Twitter or IBM. The technology behind these capabilities is fascinating and impressive. And perhaps Twitter users allow their data to be used in these ways by accepting the Terms of Use. But, it feels a lot more invasive to essentially provide any third party with a siphon into the massive data that is our Twitter accounts than it would be to, for example, insert a sponsored tweet into my feed that may be selected based on which accounts I follow or keywords I've tweeted.

Instagram Users and Facebook

I recently opened Facebook to see an updated list of People I may know. Most Facebook users are familiar with the feature. It can be an easy way to locate old friends or people who recently joined the network. But something was different. The list consisted largely of people whom I sort of recognize but have never known personally.

I realized that Facebook was trying to connect me with many of the people behind the accounts I follow on Instagram. Many of these people don't use their real names, talk about their work, or discuss personal family matters on Instagram. They're photographers sharing photos. Essentially, they're artists sharing their art with anyone who wants to take a look. And it feels like a safe way to share.

But now I'm looking at a profile of someone I knew previously only as "Ty_Chi the landscape photographer" and I can now see that he is actually Tyson Kendrick, retail manager from Chicago, father of three girls and a boy. Facebook is telling me more than Mr. Kendrick wanted to share. And I'm looking at Richard Thompson, who's a marketing specialist for one of the brands I follow. I guess Facebook knows the real people behind brand accounts too. It started feeling pretty creepy.

What does it all mean?

Monetization of social media goes way beyond targeted advertising. Businesses are reaching deep into any available data to make connections or discover insights that produce better returns. Service providers and social media platforms may share customer details with each other or with third parties to improve their own bottom lines. And the more creative they get, the more our sense of privacy erodes.

What I've outlined here extends only slightly beyond what I think most people expect. But, we should collectively consider how far this will all go. If companies will make major financial decisions based on Twitter user activity, will there be well-funded campaigns to change user behavior on social media platforms? Will the free-flowing exchange of ideas and opinions become more heavily and intentionally influenced?

The sharing/exchanging of users' personal data is becoming institutionalized. It's not a corner case of hackers breaking in. It's a systemic business practice that will grow, evolve, and expand.

I have no recipe to avoid what's coming. I have no suggestions for users looking to hold onto the last threads of their privacy. I just think it's worth thinking critically about how our data may be used and what that may mean for us in years to come.

Vittorio Bertocci - Microsoft: ADAL .NET v3 Preview: PCL, Xamarin iOS and Android Support [Technorati links]

October 30, 2014 11:00 AM

It’s again preview time for ADAL .NET! I can’t believe we are already working on version 3 – this pace would have been unthinkable just a few years back, and yet today it is barely enough to keep up with the new possibilities that the tech landscape offers.

Versions 1 and 2 have been enjoying great adoption levels. While I am happy to report that overall you guys appear very happy with them, of course we still have significant margin for improvement. This first preview does not address all the fundamental areas we want to work on (hint, hint: expect breaking changes before we get to GA) but does tackle one of the greatest hits among the feature requests: the ability to use ADAL with PCL technology, and in particular with Xamarin.
If you are not familiar with those, here’s how to wrap your head around what we are aiming at.

Say that you are working on a native application that makes use of some Microsoft API (Office 365 Exchange and/or SharePoint, Directory Graph, Azure management, etc) and/or any API protected by AD. Say that you want versions of your application for .NET desktop, iOS, Android, Windows Store and Windows Phone 8.1.
With ADAL .NET v3 you will be able to write (in C#) a single class library containing all the authentication code and API calls. Once that's done, you simply reference that class library and the ADAL .NET v3 NuGet from all your platform-specific projects. Bam. Now you’ll just need to code the UX for every project, as the authentication code will simply be reused across all platforms.

Is your mind blown yet? Mine still is! Every time I see the debugger hit the exact same breakpoint regardless of the platform I am working with, I find myself in a marveling stupor Smile

This post will give you a quick introduction to the new approach this preview pursues. I won’t go too deep into details, mostly because those are super early bits: you should expect rougher edges than you’ve experienced in our past previews, as we really want to have an opportunity to hear from you before we start chiseling. And now, without further ado…

The New Architecture

ADAL v3 maintains the usual primitives that (at least, that’s what you tell us Smile) you already know and love: AuthenticationContext, AcquireToken, AuthenticationResult. From the code perspective, ADAL v3 preview 1 will hold very few surprises.
What changed is how the library itself is factored. At this point the library contains the following:

  - ADAL.PCL (Microsoft.IdentityModel.Clients.ActiveDirectory), the portable core holding the object model you code against
  - ADAL.PCL.<Platform> (Microsoft.IdentityModel.Clients.ActiveDirectory.Platform), one platform-specific assembly per supported platform

On each platform, the full ADAL set of features is an emerging property of the combination of ADAL.PCL and ADAL.PCL.<Platform>. Oversimplifying a bit, the experience goes as follows:

  1. Say that your project is a Xamarin classic API iOS one. You add a reference to the ADAL v3 NuGet from your project. That adds to your references both ADAL.PCL (Microsoft.IdentityModel.Clients.ActiveDirectory) and ADAL.PCL.iOS (Microsoft.IdentityModel.Clients.ActiveDirectory.Platform). The only exception is if your project is itself a PCL, in which case you’ll only get ADAL.PCL.
  2. Code normally against the usual OM, which comes from ADAL.PCL.
  3. When you run your project, at runtime ADAL executes the token request logic and, once it gets to the point in which it needs to leverage a platform specific component, it injects the logic from the platform specific assembly to perform the task requested: pop out a browser surface, dispatch a message to the app with the results, and so on.

How this is even possible is, in my opinion, a thing of beauty. We have Afshin Sepehri to thank for this neat trick, and in fact for the entire architecture!
Here’s how it works. You will notice that AcquireTokenAsync accepts a new parameter, of type IAuthorizationParameters. This parameter passes to ADAL a reference to the parent component that hosts the logic requesting a token. Every platform will pass a different type; based on that, ADAL can load the correct assembly and inject it as a dynamic dependency. Neat-o!

As I mentioned in the intro to the post, if you are targeting more than one platform at once you can carry this approach further: instead of calling ADAL APIs directly from the platform-specific projects, you can gather all of that in your own shared PCL that exposes only the business logic that makes sense for your app and none of the token acquisition logic. Remember, you’ll still need to reference the ADAL NuGet from every platform-specific project for the aforementioned trick to work – but this way you’ll have a 100% shared component across all projects.

To make this more practical, let me refer to our new sample here.

The sample implements a super simple Directory Graph API client, which you can use for looking up a user by their email alias.

The solution follows the structure we discussed, one PCL and many platform-specific projects:


The PCL holds a reference to ADAL.PCL:


As it is a very simple scenario, the PCL exposes a single method:

// …
public static async Task<List<User>> SearchByAlias(string alias, IAuthorizationParameters parent) {
    AuthenticationResult authResult = null;
    JObject jResult = null;
    List<User> results = new List<User>();

    AuthenticationContext authContext = new AuthenticationContext(commonAuthority);
    if (authContext.TokenCache.ReadItems().Count() > 0)
        authContext = new AuthenticationContext(authContext.TokenCache.ReadItems().First().Authority);
    authResult = await authContext.AcquireTokenAsync(graphResourceUri, clientId, returnUri, parent);
// …

The code checks the cache to see whether there is already an entry; if so, it uses that entry's tenant as the authority, otherwise it falls back to common. Apart from the parent parameter, it's business as usual.

To see how this is used, let’s pick a random project: say the iOS one.


Here the references include our own PCL, references to both ADAL.PCL and ADAL.PCL.iOS, and the usual Xamarin stuff.

All the interesting logic is in DirSearchClient_iOSViewController:

public override void ViewDidLoad()
{
    // Perform any additional setup after loading the view, typically from a nib.
    SearchButton.TouchUpInside += async (object sender, EventArgs e) =>
    {
        if (string.IsNullOrEmpty(SearchTermText.Text))
        {
            // UX stuff cut for clarity
        }

        List<User> results = await DirectorySearcher.SearchByAlias(SearchTermText.Text, new AuthorizationParameters(this));
        if (results.Count == 0)
        {
            // ...more stuff
        }
    };
}
The interesting bit is the call to DirectorySearcher.SearchByAlias. That's the only code you need to have in the iOS project to take advantage of the dir searcher features; you don't even need to know that ADAL is in there! Smile

Running the projects in the solution is a pretty amazing experience: incredible reach with so little code… and in my favorite language, C#!


In fact, there is something I find even more mind blowing. You can actually take the solution, copy it to a Mac, open it with Xamarin Studio, and you’ll be able to edit and debug the iOS and Android solutions directly from there! See screenshot on my Mac below. I am sure that if you have worked a lot with Xamarin you might be blasé about this, but being still new to it… this is awesome Open-mouthed smile

[Screenshot: the solution open in Xamarin Studio on the Mac]


Did I mention this is rougher than our usual previews? This is meant to unblock one very specific scenario, the use of interactive authentication in multi-platform solutions, so that we can gather early feedback and course-correct as appropriate. Everything outside of that scenario might fail; expect NotImplemented to pop up often if you stray from that path. Important features like persistent cache or silent token auth are missing. The OM *WILL* change, hence if you write code against it now, please know that the next preview will very likely break it.

We also have limitations on the target platforms: ADAL can’t target Xamarin Forms or the new project types yet, it does not work yet with ASP.NET vNext, and so on. We’ll iterate and make things better, but you should expect an interesting ride Smile

Also, very important: this does not change our existing strategy for native libraries. ADAL .NET v3 will extend the scope of what a .NET developer can do, but this does not change at all our commitment to support developers who work natively on iOS and Android with Objective-C and Java. In fact, parity is not even a goal here. All the libraries share our general approach (open source, rapid iterations, etc.) but that’s about it. If you are an Objective-C or Java developer, don’t worry, we’ve still got you covered! Smile


I hope you’re as excited as I am about this new development in ADAL-land. A good starting point is this sample. I am planning to show this at my TechEd EU session today, hence shortly we should also have a video. The more feedback you give us, the faster we’ll make progress… I can’t wait to read your comments Smile

Happy coding!

Kuppinger Cole: 25.11.2014: From Privacy Impact Assessments (PIA) to Information Risk Assessments [Technorati links]

October 30, 2014 09:20 AM
In KuppingerCole

Privacy Impact Assessments (PIAs) are already, or soon will be, a legal requirement in many jurisdictions and sectors (e.g. the payment card sector). They are a great help for institutions in focusing on privacy and data flows, and therefore provide an interesting entry point into an overall discussion of information- and identity-related risks. In this webinar, KuppingerCole's analysts will discuss PIAs as a natural starting point for a broader and more complete view of digital risks.
October 29, 2014

Kuppinger Cole: 04.12.2014: Identity & Access Management as the Foundation for Digital Business [Technorati links]

October 29, 2014 07:57 AM
In KuppingerCole

The digital age, the merging of the digital with the "real," analog world, is changing our business fundamentally and irreversibly. Adapting existing business models to the new requirements and seizing new opportunities effectively and efficiently is the great challenge of this transformation and of our time. Suddenly IT is everywhere, a part of every level of the value chain. All of a company's relationships, but especially those with customers and...
October 28, 2014

Paul Madsen: Less is more [Technorati links]

October 28, 2014 08:39 PM
I attended GigaOM's Structure Connect conference in San Francisco last week. The event was great, lots of interesting discussions & panels.

I was in a 'Securing the IoT' breakout session where one of the GigaOM analysts made the assertion (paraphrasing):
Developers need better training on security, they need to take more responsibility for securing their applications.
This actually runs completely counter to what I've been seeing as the overarching trend around application security, namely that developers need to take (or even be given) less responsibility for securing their applications - not more.

If their app has to handle $$, do developers directly track currency exchange rates? No, they find an API that does that, removing them from a task secondary to that of the application itself. The currency API abstracts all the messiness away from the developer: they make a simple REST call and get back a tidy bit of JSON to parse & use.

From the developer's point of view, why would security be any different? Do they want to deal with the specific details of supporting different authentication protocols, crypto, etc.? Or would they prefer to focus on adding features and functionality to their apps?

The trend towards lightening the security load for developers manifests in various ways:

The Facebook SDK for iOS provides various login experiences that your app can use to authenticate someone. Its documentation includes all the information you need to implement Facebook login in your iOS app.
Google has the comparable Google+ Sign-In, the documentation for which asserts:
Avoid the hassle of creating your own authentication system. Google+ Sign-In mitigates data-breach risks and reduces the burden and costs of identity security. The Google authentication window creates a trusted link between you, your users, and their Google account.

The above are all examples of freeing application developers from having to bear full responsibility for securing APIs & native applications. And last I checked, both will be relevant for the Internet of Things. Freed from the burden of security, IoT developers will be able to focus their attention where they should - namely creating new & interesting visual paradigms for my wearable step data.

Nishant Kaushik - Oracle: The SCUID has a new home. At CA Technologies [Technorati links]

October 28, 2014 05:22 PM

Identity is the key to a secure, agile, cloud-based world. Which means that managing and using identities has to be easy, seamless, inherent, cost-effective. Enabling that was the mission when I joined Identropy to build what would become SCUID. We believed that the future of identity management lay in the cloud, and required a fundamental rethink of the business of managing identities. At SCUID, we’ve been working hard to realize our vision of building the control point that the modern enterprise needs. It’s been an interesting journey so far, with some interesting twists and turns along the way that helped shape and reshape our plan. And our path just took one more, pretty interesting, turn.

CA Technologies believes in a future where every business is being rewritten by software. Identity Management is no different. The folks in CA’s Security team saw in SCUID the chance to bring onboard an innovative solution and an amazing team to help accelerate their Secure Cloud offering into a future where identity has no boundaries, is easy-to-use, and enables secure business transformation. And they took that chance. So SCUID is now part of the CA family, and we’re raring to unleash the kraken.

Stay tuned. It’s gonna be an interesting ride.

The post The SCUID has a new home. At CA Technologies appeared first on Talking Identity | Nishant Kaushik's Look at the World of Identity Management.

Vittorio Bertocci - Microsoft: ADAL JavaScript and AngularJS – Deep Dive [Technorati links]

October 28, 2014 03:00 PM

Many web apps are structured as “single page apps”, or SPAs: they have a JavaScript-heavy frontend and a Web API backend. Notable examples: Outlook Web App, Gmail.

Properly securing a SPA’s traffic between its JS frontend and its Web API backend requires an OAuth2 flow, the implicit grant, that Azure AD did not expose… until today’s developer preview.

Today we are turning on the ability for you to enable a preview of our OAuth2 implicit grant support on any web app you choose.
At the same time, we are releasing the developer preview of a JavaScript flavor of ADAL, which will make it super easy for you to take advantage of Azure AD for handling sign in in your AngularJS based SPAs.

Last week I visited Cory on his Web Camp TV show, where I had the opportunity of introducing ADAL JS and demonstrating it in action; you can head to the recording if you want a quick overview.

In this post I will dig a bit deeper in the library and its basic usage. Remember, this is a preview and nothing is set in stone. We really want your feedback on this one – please let us know what you like and what you want us to do differently!

That said, without further ado…

The Basics

As usual, using ADAL JS does not require any protocol knowledge. In fact, operations here are even easier (though more tightly scoped) than with any other ADAL flavor. You just need to add a reference to the library, initialize it with the coordinates of your app, specify which routes should be protected by Azure AD – and you’re good to go.

Let’s flesh out the details. I will use our basic SPA sample as a reference.

Set up the OAuth2 Implicit grant for your test app

By default, applications provisioned in Azure AD are not enabled to use the OAuth2 implicit grant. In order to try the OAuth2 implicit grant preview, you need to explicitly opt in for each app you want to experiment with. In the current developer preview the process unfolds as in the following.

  1. Navigate to the Azure management portal. Go to your directory, head to the Applications tab, and select the app you want to enable.
  2. Using the Manage Manifest button in the drawer, download the manifest file for the application and save it to disk.
  3. Open the manifest file with a text editor. Search for the oauth2AllowImplicitFlow property. You will find that it is set to false; change it to true and save the file.
  4. Using the Manage Manifest button, upload the updated manifest file. Save the configuration of the app.
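For reference, after step 3 the relevant portion of the manifest should look roughly like this (only the changed property is shown; all the other properties in the file stay as they are):

```json
{
    "oauth2AllowImplicitFlow": true
}
```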

Add a reference to the library

As is customary for JS libraries, you can either include the file (adal.js) in your local Scripts library, or pull it in via CDN (for this preview, the minified version lives at https://secure.aadcdn.microsoftonline-p.com/lib/0.0.1/js/adal.min.js). Either way, you’ll want to pull adal.js into your main html page:

<script src="App/Scripts/adal.js"></script>


The rest all takes place in your main app.js file, where you’d normally configure your routes. In our sample we use the OOB ngRoute, but in fact you don’t have to stick to it.

First, we need to include the ADAL module:

var app = angular.module('TodoSPA', ['ngRoute','AdalAngular']);

We also need to make sure that we pass the corresponding provider to the app.config:

app.config(['$routeProvider', '$httpProvider', 'adalAuthenticationServiceProvider',
 function ($routeProvider, $httpProvider, adalAuthenticationServiceProvider) {

Then, as we define the routes as usual, we can specify if there are some that we want to protect with Azure AD:

$routeProvider.when("/Home", {
    controller: "HomeController",
    templateUrl: "/App/Views/Home.html",
}).when("/TodoList", {
    controller: "TodoListController",
    templateUrl: "/App/Views/TodoList.html",
    requireADLogin: true,
});

There are other ways of kicking off the authentication dance, but this is the one with the least amount of code involved.

Finally, the initialization proper. We have to pass to ADAL the coordinates that describe our app in Azure AD:

adalAuthenticationServiceProvider.init(
    {
        tenant: 'contoso.onmicrosoft.com',
        clientId: 'e9a5a8b6-8af7-4719-9821-0deef255f68e'
    },
    $httpProvider
);

On the client, that’s all you have to do. For what concerns the API backend, you just configure it the same way you configure any Web API with Azure AD authentication: with the OAuth bearer OWIN middleware. The only difference is that for the audience you’ll need to use the clientId of the app, the same value you just passed to init() above.


Hit F5. You’ll land on the home view. Click on Todo list: you’ll be bounced to the familiar Azure AD authentication screen. Enter your credentials, and you’ll end up on the screen below.


♪Ta dah!♪ Barely 3 lines of JavaScript strategically placed, and your SPA gets to partake in one of the biggest identity systems on the planet.

Deep Dive

Well, that was pretty smooth. Aren’t you curious about how that happened?

Protocol Flows

Traditional web apps rely on roundtrips both for executing business logic and for driving the user experience. The classic way of securing those entails an initial leg where the user is shipped elsewhere via a full redirect, typically to the authority that the app trusts as identity provider. This initial leg results in a token being sent to the app. The app (or usually some form of middleware sitting in front of it) will validate the token and, upon successful verification, will issue to itself a session cookie. The session cookie will be present in all subsequent requests: the app will interpret its presence (and validity) as a sign that the request comes from an authenticated user. Rinse and repeat until the cookie expires. And once it does, the middleware will simply redirect the browser to the authority again – and the dance will continue.

SPAs don’t work too well with the same model. The cookie can be used for securing calls to your own backend, and in fact lots of people do so; but this does not work if you want to call APIs on a different backend (cookies can only go to their own domain). Also, web API calls aren’t too conducive to renewing cookies: when some JavaScript is trying to get data for populating an individual DIV and the session cookie it uses is expired, getting back a 302 isn’t the most actionable thing. Finally, having to support cookies on your Web API is just odd: that would not help at all when the same APIs are invoked by a native client, forcing you to maintain multiple validation stacks and ensure that they don’t step on each other’s toes. ADAL JS does not use cookies, or at least not directly.

Signing In

Signing in is kind of a misnomer here, but only if you know what’s really going on under the cover – and most don’t need to. Rather than forcing everybody to grok what happens in the name of accuracy, we just went with the flow Smile

Signing in for ADAL JS actually means obtaining a token for your app’s backend API. As simple as that.

Given that from Azure AD’s perspective your JS frontend and your web API backend are actually different manifestations of the same app, you will be asking for a token for yourself: which is exactly the semantics of an id token. We already do that in the OpenID Connect middleware; however, there our goal is to deliver the token to the server side of the web app. Here we want the JavaScript frontend to obtain the token, so that it can later use it whenever it needs to invoke its own Web API.

Here’s how the request looks:

GET https://login.windows.net/developertenant.onmicrosoft.com/oauth2/authorize?response_type=id_token&client_id=aeadda0b-4350-4668-a457-359c60427122&redirect_uri=https%3A%2F%2Flocalhost%3A44326%2F&state=8f0f4eff-360f-4c50-acf0-99cf8174a58b&nonce=8b8385b9-26d3-42a1-a506-a8162bc8dc63 HTTP/1.1

Notice that we are asking for an id_token, and that we do inject state and nonce for the usual security reasons.
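A request like the one above can be assembled with very little code. The following is a minimal sketch, not ADAL JS's actual implementation; buildAuthorizeUrl is a hypothetical helper, and in practice the library generates the state and nonce values (random GUIDs) and remembers them for later validation:

```javascript
// Sketch: assemble an implicit-grant authorize request for an id_token.
// tenant, clientId, redirectUri come from your app's Azure AD registration.
function buildAuthorizeUrl(tenant, clientId, redirectUri, state, nonce) {
  var base = "https://login.windows.net/" + tenant + "/oauth2/authorize";
  var params = [
    "response_type=id_token",                          // we want an id_token back
    "client_id=" + encodeURIComponent(clientId),
    "redirect_uri=" + encodeURIComponent(redirectUri),
    "state=" + encodeURIComponent(state),              // echoed back; guards against CSRF
    "nonce=" + encodeURIComponent(nonce)               // bound into the id_token; guards against replay
  ];
  return base + "?" + params.join("&");
}
```

The state value comes back untouched in the response, while the nonce is embedded in the id_token itself; the client must verify both against the values it generated.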

The response is the interesting part:

HTTP/1.1 302 Found
Cache-Control: no-cache, no-store
Pragma: no-cache
Content-Type: text/html; charset=utf-8
Expires: -1
Location: https://localhost:44326/#id_token=eyJ0eXAiOiJKV1QiLC[SNIP]gu1OnSTN2Q2NVu3ug&state=8f0f4eff-360f-4c50-acf0-99cf8174a58b&session_state=e4ec5227-3676-40bf-bdfe-454de9a2fdb2
Server: Microsoft-IIS/8.0

The id_token is delivered as a fragment, so that only the client is able to access it.

Now that we have the token on the client, we can store it and retrieve it whenever we are about to send a request to our backend API. We’ll detail how ADAL JS does it a little later.
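Extracting the token on the client boils down to splitting the fragment into key/value pairs. This is a simplified sketch of that step (parseFragment is a hypothetical helper; in the browser the input would be window.location.hash, and ADAL JS does its own, more thorough processing including state validation):

```javascript
// Sketch: turn "#id_token=...&state=...&session_state=..." into an object.
function parseFragment(hash) {
  var result = {};
  hash.replace(/^#/, "").split("&").forEach(function (pair) {
    if (!pair) return;
    var idx = pair.indexOf("=");
    result[decodeURIComponent(pair.slice(0, idx))] =
      decodeURIComponent(pair.slice(idx + 1));
  });
  return result;
}

// e.g. parseFragment(window.location.hash).id_token
```

Because the fragment never leaves the browser, the token can then be stashed (for example in localStorage) for reuse, without the server ever seeing the redirect payload.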

Calling APIs

Once you have a token, on the wire a call to an API looks exactly the same as what you’d see from any other client type: you’ll find an authorization header containing a bearer token, just like OAuth2 prescribes.

In this particular case, however, we know a lot about the circumstances in which the call will be made. Whereas a native client can use all sorts of classes to compose the request, here we know it’s going to use what the browser (and JS in general) makes available. Thanks to that knowledge, ADAL JS can go further than any other ADAL flavor and automatically attach the right token whenever you make an API call. In fact, it can even renew it or request it ex novo contextually to the call! That’s why in the sample you don’t see any code referencing AuthenticationContext, AcquireToken and the like: those primitives are there but unless you want to do something custom you don’t actually need to use them Smile
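On the wire, "attaching the right token" just means decorating the request headers before the call goes out. A hedged sketch of that decoration follows (withBearer is a hypothetical helper, not part of ADAL JS; the library does the equivalent transparently via an AngularJS $http interceptor):

```javascript
// Sketch: return a copy of the given headers with an OAuth2 bearer
// authorization header added, as prescribed by the OAuth2 spec.
function withBearer(headers, token) {
  var out = {};
  for (var k in headers) { out[k] = headers[k]; }
  out["Authorization"] = "Bearer " + token;
  return out;
}
```

An XMLHttpRequest or $http call would then send these headers; the point of ADAL JS is that you never write this code yourself, because the interceptor looks up (or renews) the cached token and injects the header on every matching API call.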

Keeping the session alive


One of the other shortcomings of cookie based sessions lies in the fact that renewing them is kind of a pain, which leads devs to decouple the session validity from the validity of the token itself. That typically delivers a smoother experience (popping a redirect once per hour isn’t fun for the user) but makes it much harder to revoke a session when something goes south.

In this flow we are not working with cookies, but tokens. In native applications, tokens are refreshed using refresh tokens; here, however, we run in a user agent, hence delivering something as autonomous as a refresh token would be a dangerous proposition. Not an option, then.
We do have another option to renew tokens without perturbing the user experience AND without introducing hard-to-revoke session artifacts. The trick is to go back to the authority asking for a token, like we’d do in the roundtrip apps scenario, but doing so in a hidden iframe. If there is still an existing session with the authority (which might be represented by a cookie – but it is a cookie in the domain of the authority, NOT the app’s) we will be able to get a new token without any UX. There is even a specific parameter, prompt=none, which lets Azure AD know that we want to get the token without any UX, and if that can’t be done we want to get an error back. Here’s the request.

GET https://login.windows.net/52d4b072-9470-49fb-8721-bc3a1c9912a1/oauth2/authorize?response_type=id_token&client_id=e9a5a8b6-8af7-4719-9821-0deef255f68e&redirect_uri=http%3A%2F%2Flocalhost%3A49724%2F&state=064f47a8-da8d-45bc-bc58-78469dfa7434%7Ce9a5a8b6-8af7-4719-9821-0deef255f68e&prompt=none&login_hint=mario%40developertenant.onmicrosoft.com&
Host: login.windows.net

If the session is gone (maybe somebody signed out) then we will not be able to extend the session further without user interaction, but that will be because that’s what the authority wants and not for any limitation of the flow. Pretty neat, eh? Smile ADAL JS does all this behind the scenes for you. More details later.
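The silent renewal described above can be sketched as follows. This is an illustration of the mechanism, not ADAL JS's internal code; buildRenewUrl is a hypothetical helper, and the iframe part is shown as a comment since it needs a DOM:

```javascript
// Sketch: assemble a silent-renewal authorize request. The two extras that
// make it silent are prompt=none (fail instead of showing any UX) and a
// login_hint naming the user whose existing session should be used.
function buildRenewUrl(tenant, clientId, redirectUri, user) {
  return "https://login.windows.net/" + tenant + "/oauth2/authorize" +
    "?response_type=id_token" +
    "&client_id=" + encodeURIComponent(clientId) +
    "&redirect_uri=" + encodeURIComponent(redirectUri) +
    "&prompt=none" +
    "&login_hint=" + encodeURIComponent(user);
}

// In the browser, the request would then run invisibly:
// var frame = document.createElement("iframe");
// frame.style.display = "none";
// frame.src = buildRenewUrl(tenant, clientId, redirectUri, user);
// document.body.appendChild(frame);
// ...and the new token (or an error) comes back in the iframe's fragment.
```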

Getting Tokens for Other Services

The trick described above can do much more than renewing the tokens for your own Web API backend; it can be used for obtaining tokens for other Web APIs, too. However, this time we will be requesting regular access tokens, and we’ll have to specify the resource ID of the API we want. Furthermore, those APIs will need to support CORS.

Signing out

Signing out works pretty much as usual. Note that we don’t need to let our backend know that we’re signing out, given that all session artifacts are under the JS frontend’s control.

GET https://login.windows.net/developertenant.onmicrosoft.com/oauth2/logout?post_logout_redirect_uri=https%3A%2F%2Flocalhost%3A44326%2F HTTP/1.1


Now that we understand the basic flows behind ADAL JS, we can dig a bit deeper into how the library makes all that happen.

ADAL is exposed to your apps as an AngularJS module, “adalAngular”.
It does not have to be that way; in fact, you could use the lower layer from any other JS stack if you so choose, but for this preview you’ll have the best experience if you use AngularJS.

All of the interesting logic takes place in a service, “adalAuthenticationService”, which surfaces selected primitives to be used at init time and from the app’s controllers. The main methods and properties you’ll work with are:

Init(configOptions, httpProvider)

This method initializes the entire ADAL JS engine. You have seen it in action in the sample, where configOptions carried the only two mandatory parameters:

Tenant – the tenant ID or domain of the target Azure AD tenant.

clientId – client ID of the application.

The values passed in configOptions end up being exposed afterwards in the config property.

There are many more optional parameters you can use for modifying the default behavior.

redirectUri – in case you want to use anything different from the root path of the application when getting back tokens.

postLogoutRedirectUri – in case you want to use anything different from the root path of the application when signing out.

localLoginUrl – this parameter allows you to specify a URL to be used in lieu of the default redirect to Azure AD that takes place when an unauthenticated user attempts to access a protected route. You can use that page for creating any auth experience you like, for example something that gives users a choice between authenticating with Azure AD (see below on how to implement that option) or with a local provider.

Instance – by default, ADAL JS addresses requests to login.windows.net. If you want to override that, you can enter the desired hostname here.

expireOffsetSeconds – this value is used to determine how far in advance an access token should be considered expired. Every time ADAL fetches a token from the cache, before using it ADAL assesses whether the token is less than this value (the default is 120 seconds) away from expiring. If it is, ADAL triggers a renew flow before performing the call.

extraQueryParameter – this has the same function as its homonym in the other ADALs: it allows you to pass any extra value you want to send to Azure AD in the querystring of authorization requests.

Endpoints – this is a very important parameter. ADAL can always tell what token to use when you’re calling your own backend, but without this property it would have no clue how to handle other APIs. Endpoints is an array of (URL, resourceID) pairs. Every time a call to an API is made, ADAL’s interceptor looks up the requested URL in endpoints, then searches the cache for a token with the corresponding resourceID. If it finds one, it uses it; otherwise, it requests a token for that resource, caches it and then uses it. Very handy!
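As an illustration of the lookup the interceptor performs, here is a minimal sketch in plain JS. This is not ADAL’s actual implementation, and the URLs and resource IDs below are made-up examples:

```javascript
// Hypothetical endpoints map: API URL prefix -> resourceID.
var endpoints = {
  'https://myapi.azurewebsites.net/': 'https://contoso.onmicrosoft.com/myapi',
  'https://graph.windows.net/': 'https://graph.windows.net'
};

// Given the URL of an outgoing call, find the resourceID whose token we need.
function resourceForUrl(url) {
  for (var prefix in endpoints) {
    if (url.indexOf(prefix) === 0) {
      return endpoints[prefix];
    }
  }
  return null; // unknown API: no token is attached to the request
}
```

With a map like this, a call to any URL under a registered prefix is matched to the resource whose token should accompany it.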


userInfo

This property holds the main info about the currently signed-in user and the signed-in status. Sub-properties:

isAuthenticated – Boolean indicating whether there is a currently signed in user.

userName – UPN or email of the currently signed in user.

profile – the claims found in the id token, decoded and exposed as properties.

loginError – login errors, if any.
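To see what “decoded and exposed as properties” means for profile: an id token’s payload segment is just base64url-encoded JSON. A minimal decoding sketch (run in Node for brevity, with a made-up token; real code must validate the token, not merely decode it):

```javascript
// Decode the middle (payload) segment of a JWT. Illustrative only:
// this skips signature validation entirely.
function decodeJwtPayload(jwt) {
  var payload = jwt.split('.')[1]
    .replace(/-/g, '+')   // base64url -> base64
    .replace(/_/g, '/');
  return JSON.parse(Buffer.from(payload, 'base64').toString('utf8'));
}

// Build a fake id token with a known payload to demonstrate:
var fakePayload = Buffer.from(
  JSON.stringify({ upn: 'mario@developertenant.onmicrosoft.com' })
).toString('base64');
var claims = decodeJwtPayload('header.' + fakePayload + '.signature');
// claims.upn now holds the UPN claim from the payload
```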

login and logOut

These methods can be used for driving login and logout directly, as opposed to triggering them as side effects of requesting a protected route. They are pretty easy to use: all you need to do is set them up in the controller backing the view where you want to place your sign in/out gestures:

app.controller('HomeController', ['$scope', 'adalAuthenticationService', '$location',
  function ($scope, adalAuthenticationService, $location) {
    $scope.Login = function () {
      adalAuthenticationService.login();
    };
    $scope.Logout = function () {
      adalAuthenticationService.logOut();
    };
  }]);

ADAL JS also offers you the opportunity of handling events associated with the login outcome. That’s super important for error handling. For example:

  $scope.$on("adal:loginSuccess", function () {
        $scope.testMessage = "loginSuccess";
  });

  $scope.$on("adal:loginFailure", function () {
        $scope.testMessage = "loginFailure";
  });


You can take a look at the cache logic by inspecting the code. As mentioned, the ADAL JS cache is based on localStorage. It allows for interesting feats, such as the ability to stay signed in even if you close the browser. Below you can see a screenshot of ADAL’s cache content (and what ADAL JS keeps in localStorage in general).

VERY IMPORTANT. The use of localStorage does have security implications, given that other apps in the same domain will have access to it (much like what happens with cookies in the default case) and it is prone to all the same attacks that localStorage has to deal with. Please exercise caution when using this feature and ensure you do all the due diligence you’d normally do for protecting data in localStorage. Also, remember: this is a developer preview. That means it should not be used for anything but experimenting, given that we are still just gathering feedback… please don’t use these bits for anything critical!



Kudos to Omer Cansizoglu, the awesome developer who put together ADAL JS! Also, thanks to Mads Kristensen and Damian Edwards for their invaluable feedback.

These are very early bits, but we know that this is a scenario of great interest to you. We wanted to put the bits in your hands as soon as possible, so that you can experiment and let us know what to do. We are looking forward to your feedback!

Ludovic Poitou - ForgeRock2014 European IRM Summit is only a few days away! [Technorati links]

October 28, 2014 01:42 PM

The European IRM Summit is just a few days away: it starts Monday next week at the Powerscourt Estate near Dublin.

I’m polishing the content and demos for the 2 sessions that I’m presenting, one for each product that I’m managing: OpenDJ and OpenIG. Both take place on the Wednesday afternoon in the Technology Overview track.

If you’re still contemplating whether you should attend the event, check the finalised agenda. And hurry over to the registration site! I’m told there are a few seats remaining, but they might not last for long!

I’m looking forward to seeing everyone next week in Ireland.


Filed under: General Tagged: conference, Dublin, ForgeRock, identity, Ireland, IRM, IRMSummit, IRMSummit2014, IRMSummitEurope, opendj, openig, summit
October 27, 2014

KatasoftA Simple Web App with Node.js, Express, Bootstrap & Stormpath -- in 15 Minutes [Technorati links]

October 27, 2014 06:00 PM

Here at Stormpath we <heart> Node.js – it’s so much fun to build with! We’ve built several libraries to help Node developers achieve user management nirvana in their applications.

If you’ve built a web app before, you know that all the “user stuff” is a royal pain. Stormpath gives developers all that “user stuff” out-of-the-box (https://docs.stormpath.com/nodejs/express/) so you can get on with what you really care about – your app! By the time you’re done with this tutorial ( < 15 minutes, I promise), you’ll have a fully-working Express app.

We will focus on our Express-Stormpath library to roll out a simple Express.js web application, with a complete user registration and login system, with these features:

In this demo we will be using Express 4.0; we’ll discuss some of its great features as we go along. I will be using my Mac, the Terminal app, and Sublime Text as a text editor.

What is Stormpath?

Stormpath is an API service that allows developers to create, edit, and securely store user accounts and user account data, and connect them with one or multiple applications. Our API enables you to:

In short: we make user account management a lot easier, more secure, and more scalable than what you’re probably used to.

Ready to get started? Register for a free developer account at https://api.stormpath.com/register

Start your project

Got your Stormpath developer account? Great! Let’s get started.. vroom vroom

If you don’t already have Node.js on your system, head over to https://nodejs.org and install it on your computer. In our examples we will be using a Mac; all commands you see should be entered in your Terminal (without the $ in front – that’s a symbol to let you know that these are terminal commands).

Step one is to create a folder for this project and change into that directory:

$ mkdir my-webapp
$ cd my-webapp

Now that we are in the folder we will want to create a package.json file for this project. This file is used by Node.js to keep track of what libraries (aka modules) your project depends on. To create the file:

$ npm init

You will be asked a series of questions; for most of them you can just press enter to accept the default value. Here is what I chose: I decided to call my main file app.js, I set my own description, and I set the license to MIT. Everything else I just pressed enter on:

Press ^C at any time to quit.
name: (my-webapp)
version: (0.0.0)
description: Website for my new app
entry point: (index.js) app.js
test command:
git repository:
license: (ISC) MIT
About to write to /private/tmp/my-webapp/package.json:

{
  "name": "my-webapp",
  "version": "0.0.0",
  "description": "Website for my new app",
  "main": "app.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "",
  "license": "MIT"
}

Is this ok? (yes) yes

With that I will now have a package.json file in my folder. I can take a look at what’s in it:

$ cat package.json

{
  "name": "my-webapp",
  "version": "0.0.0",
  "description": "Website for my new app",
  "main": "app.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "",
  "license": "MIT"
}

Looks good! Now let’s install the libraries we want to use. You can install them all with this command:

$ npm i --save express express-stormpath jade forms csurf xtend

The --save option will add these modules to the dependencies in your package.json. Here is what each module does:

Gather your API Credentials and Application Href

The connection between your app and Stormpath is secured with an API key pair. You will provide these keys to your web app and it will use them when it communicates with Stormpath. You can download your API key pair in our Admin Console. After you log in, you can download your API key pair from the home page; it will download as the apiKey.properties file – we will use this in a moment.
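If you are curious what the downloaded file contains, apiKey.properties is a plain “key = value” text file. Here is a small sketch of parsing such a file by hand; the key names follow Stormpath’s apiKey.id / apiKey.secret convention and the values are placeholders (in practice, express-stormpath reads this file for you via the apiKeyFile option):

```javascript
// Minimal parser for a .properties-style "key = value" file. Illustrative only.
function parseProperties(text) {
  var props = {};
  text.split(/\r?\n/).forEach(function (line) {
    var idx = line.indexOf('=');
    // skip blank lines, comments, and lines without '='
    if (idx > 0 && line.charAt(0) !== '#') {
      props[line.slice(0, idx).trim()] = line.slice(idx + 1).trim();
    }
  });
  return props;
}

var sample = 'apiKey.id = YOUR_ID_HERE\napiKey.secret = YOUR_SECRET_HERE';
var keys = parseProperties(sample);
// keys['apiKey.id'] and keys['apiKey.secret'] now hold the two values
```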

While you are in the Admin Console you want to get the href for your default Stormpath Application. In Stormpath, an Application object is used to link your web app to your user stores inside Stormpath. All new developer accounts have an app called “My Application”. Click on “Applications” in the Admin Console, then click on “My Application”. On that page you will see the Href for the Application. Copy this — we will need it shortly.

Writing the application entry (app.js)

It’s time to create app.js, the entry point for your server application. You can do that from Sublime Text, or you can do it in the terminal:

$ touch app.js

Now open that file in Sublime Text and put the following block of code in it:

var express = require('express');
var stormpath = require('express-stormpath');

var app = express();

app.set('views', './views');
app.set('view engine', 'jade');

var stormpathMiddleware = stormpath.init(app, {
  apiKeyFile: '/Users/robert/.stormpath/apiKey.properties',
  application: 'https://api.stormpath.com/v1/applications/xxx',
  secretKey: 'some_long_random_string',
  expandCustomData: true,
  enableForgotPassword: true
});

app.use(stormpathMiddleware);

app.get('/', function(req, res) {
  res.render('home', {
    title: 'Welcome'
  });
});

app.listen(3000);


You’ll need to set the location of apiKeyFile to be the location on your computer where you saved the file. You also need to set the application href to be the one that you looked up earlier.

The secretKey value should be changed as well; it will be used as a key for encrypting any cookies that are set by this webapp. I suggest using a long, random string of any type of character.

In this example we’ve enabled some Stormpath-specific options; they are:

There are many more options that can be passed, and we won’t cover all of them in this demo. Please see the Express-Stormpath documentation for a full list.

Create your home page

Let’s get the easy stuff out of the way: your home page. Create a views directory and then create a Jade file for the home page:

$ mkdir views
$ touch views/home.jade

Now open that file in Sublime Text and put the following in it:

html
  head
    link(href='//netdna.bootstrapcdn.com/bootstrap/3.1.1/css/bootstrap.min.css', rel='stylesheet')
  body
    div.container
      h1 Hello!

      if user
        p Welcome, #{user.fullName}
          a.small(href="profile") Edit my profile
        a.btn.btn-primary(href="/logout") Logout
      else
        p Welcome to my app, ready to get started?
          a.btn.btn-primary(href="/login") Login now
          span.small Don't have an account?
          span &nbsp;
          a.small(href="/register") Register now

This is a simple view that will prompt a new visitor to log in, or greet a registered user if they have already logged in.

With that… we’ve got something we can look at!

Run the server – It’s Aliiiive!

I kid you not: your application is ready to be used. Just run this command to start the server:

$ node app.js

This will start your app, which is now running as a web server on your computer. You can now open this link in your browser:

http://localhost:3000
You should see your home page now:

Node App Home

Go ahead, try it out! Create an account; you will be redirected back to the home page and shown your name. Then log out and log in again. Same thing! Pretty amazing, right?

Pro tip: use a file watcher

As we move forward we will be editing your server files, and you will need to restart the server each time. You can kill the server by typing Ctrl + C in your Terminal, but I suggest using a “watcher” that will do this for you.

I really like the Nodemon tool. You can install it globally (it will always be ready for you!) with this command:

$ npm install -g nodemon

After installation, you can then run this command:

$ nodemon app.js

This will start your server and watch for any file changes. Nodemon will automatically restart your server if you change any files – sweet!

Create the profile page

A common feature of most sites is a “dashboard” or “profile” page – a place where your visitors provide some essential information.

For example purposes, we’re going to build a profile page that allows you to collect a shipping address from your visitors. We will leverage custom data, one of the most powerful features of Stormpath.

To begin, let’s create a new view for this dashboard:

$ touch views/profile.jade

And a JavaScript file where the route handler will live:

$ touch profile.js

Now we’ve got some copy-and-paste work to do. These two files are pretty big, so we’ll explain them after the paste.

Paste this into profile.js:

var express = require('express');
var forms = require('forms');
var csurf = require('csurf');
var collectFormErrors = require('express-stormpath/lib/helpers').collectFormErrors;
var stormpath = require('express-stormpath');
var extend = require('xtend');

// Declare the schema of our form:

var profileForm = forms.create({
  givenName: forms.fields.string({
    required: true
  }),
  surname: forms.fields.string({ required: true }),
  streetAddress: forms.fields.string(),
  city: forms.fields.string(),
  state: forms.fields.string(),
  zip: forms.fields.string()
});

// A render function that will render our form and
// provide the values of the fields, as well
// as any situation-specific locals

function renderForm(req,res,locals){
  res.render('profile', extend({
    title: 'My Profile',
    csrfToken: req.csrfToken(),
    givenName: req.user.givenName,
    surname: req.user.surname,
    streetAddress: req.user.customData.streetAddress,
    city: req.user.customData.city,
    state: req.user.customData.state,
    zip: req.user.customData.zip
  }, locals || {}));
}

// Export a function which will create the
// router and return it

module.exports = function profile(){

  var router = express.Router();

  router.use(csurf());

  // Capture all requests, the form library will negotiate
  // between GET and POST requests

  router.all('/', stormpath.loginRequired, function(req, res) {
    profileForm.handle(req, {
      success: function(form){
        // The form library calls this success method if the
        // form is being POSTED and does not have errors

        // The express-stormpath library will populate req.user,
        // all we have to do is set the properties that we care
        // about and then call save() on the user object:
        req.user.givenName = form.data.givenName;
        req.user.surname = form.data.surname;
        req.user.customData.streetAddress = form.data.streetAddress;
        req.user.customData.city = form.data.city;
        req.user.customData.state = form.data.state;
        req.user.customData.zip = form.data.zip;
        req.user.save(function(err){
          if(err){
            renderForm(req, res, {
              errors: [{
                error: err.userMessage ||
                err.message || String(err)
              }]
            });
          }else{
            renderForm(req, res, { saved: true });
          }
        });
      },
      error: function(form){
        // The form library calls this method if the form
        // has validation errors.  We will collect the errors
        // and render the form again, showing the errors
        // to the user
        renderForm(req, res, {
          errors: collectFormErrors(form)
        });
      },
      empty: function(){
        // The form library calls this method if the
        // method is GET - thus we just need to render
        // the form
        renderForm(req, res);
      }
    });
  });

  // This is an error handler for this router

  router.use(function (err, req, res, next) {
    // This handler catches errors for this router
    if (err.code === 'EBADCSRFTOKEN'){
      // The csurf library is telling us that it can't
      // find a valid token on the form
      if (req.user){
        // session token is invalid or expired.
        // render the form anyways, but tell them what happened
        renderForm(req, res, {
          errors:[{error:'Your form has expired.  Please try again.'}]
        });
      }else{
        // the user's cookies have been deleted, we don't know what
        // their intention is - send them back to the home page
        res.redirect('/');
      }
    }else{
      // Let the parent app handle the error
      return next(err);
    }
  });

  return router;
};

Paste this into profile.jade:


html
  head
    link(href='//netdna.bootstrapcdn.com/bootstrap/3.1.1/css/bootstrap.min.css', rel='stylesheet')
  body
    div.container
      h1 My Profile

      if errors
        each error in errors
          div.alert.alert-danger
            span #{error.error}
      if saved
        div.alert.alert-success
          span Your profile has been saved
      form.login-form.form-horizontal(method='post', role='form')
        input(name='_csrf', type='hidden', value=csrfToken)
        div.form-group
          label.col-sm-4 First Name
          div.col-sm-8
            input.form-control(name='givenName', required=true, placeholder='Your first name', value=givenName)
        div.form-group
          label.col-sm-4 Last Name
          div.col-sm-8
            input.form-control(name='surname', required=true, placeholder='Your last name', value=surname)
        div.form-group
          label.col-sm-4 Street address
          div.col-sm-8
            input.form-control(name='streetAddress', placeholder='e.g. 123 Sunny Ave', value=streetAddress)
        div.form-group
          label.col-sm-4 City
          div.col-sm-8
            input.form-control(name='city', placeholder='e.g. City', value=city)
        div.form-group
          label.col-sm-4 State
          div.col-sm-8
            input.form-control(name='state', placeholder='e.g. CA', value=state)
        div.form-group
          label.col-sm-4 ZIP
          div.col-sm-8
            input.form-control(name='zip', placeholder='e.g. 94116', value=zip)
        div.form-group
          div.col-sm-offset-4.col-sm-8
            button.login.btn.btn-primary(type='submit') Save
            | &nbsp;
            a(href="/") Return to home page

Breaking it down

You’ve just created an Express Router. Saywha? I really like how the Express maintainers have described this:

A router is an isolated instance of middleware and routes.
Routers can be thought of as "mini" applications, capable only
of performing middleware and routing functions. Every express
application has a built-in app router.

… saywha?

In my words: Express 4.0 encourages you to break up your app into “mini apps”. This makes everything much easier to understand and maintain. This is what we’ve done with the profile.js file — we’ve created a “mini app” which handles JUST the details associated with the profile page.

Don’t believe me? Read on.

Plug in your profile page

Because we followed the Router pattern, adding the profile page to your existing app.js file is this simple (put it right above the call to app.listen):

app.use('/profile', require('./profile')());

Omg. Yes. YES. You’ve just decoupled the implementation of a route from its addressing. Holy grail? Almost. Awesome? Most def.

Restart your server and visit /profile; you should see the form now:

Node App Home

Breaking it down – for real

Okay, there’s a LOT more to talk about here. So let me cover the important points:

Wrapping it up

Alas, we’ve reached the end of this tutorial. You now have a web app that can register new users and allow them to provide you with a shipping address. Pretty sweet, right?

Following the profile example you now have everything you need to start building other pages in your application. As you build those pages, I’m sure you’ll want to take advantage of some other great features, such as:

Those are just a few of my favorites, but there is so much more!

Please read the Express-Stormpath Product Guide for details on how to implement all these amazing features — and don’t hesitate to reach out to us!

WE LOVE WEB APPS and we want your user management experience to be 10x better than you ever imagined.

-robert out

CourionIncrease IT Efficiency & Improve Security with Intelligence Enabled IAM [Technorati links]

October 27, 2014 03:15 PM

Access Risk Management Blog | Courion

David Paparello: “Too much to do, too few resources.”

This is a phrase that all too frequently comes up in the discussions that I have with IT staff in organizations around the globe. They feel never-ending pressure to improve security and service to the business, but usually with the same or fewer resources. This is a challenge that is especially glaring when trying to marry solid Identity and Access Management practices with current business processes.

For example, a security manager I spoke to at a large health organization was nearly brought to tears as he talked about the need to accurately track an ever-changing user population where the same person might move through multiple roles and through multiple access scenarios in the course of just a week. At another organization, a help desk manager I worked with wrestled daily with an avalanche of access requests from users who had no idea what access to request, and were seeking help from administrators who in turn had no idea what access users actually needed.

What’s often needed in these situations is an IAM program that is centered on incremental progress that can provide some instant relief while also generating the time and resources needed so that the program can subsequently be expanded into a comprehensive solution. The key is to know where to begin, and to aim for quick business value. Those quick wins will help free up resources by simplifying and automating processes that typically suck up valuable manpower and time. Each incremental win then makes it easier to maintain momentum and expand user buy-in within the organization.

To get started with an IAM program that supports this kind of continuous improvement, you should first understand your identity and access landscape. By leveraging intelligence, as with Courion’s Access Insight, you can get an immediate evaluation of Microsoft Active Directory, a key system for most organizations. The dashboards included with Access Insight highlight potentially urgent security issues as well as IAM processes that may be broken. Access Insight integrates with the Access Assurance Suite or other IAM solutions so you can drill down to fix those broken processes and promptly disable access for terminations and properly manage non-employee access.

Another benefit of getting the big picture view of your identity and access landscape with Access Insight is to better understand who has access to what and to put automated processes in place to refresh that information at least daily. Even the most complex scenarios benefit greatly from putting rules in place that can automatically map access for 70-95% of the workforce. Allowances can be made for exceptions to be handled manually so that no one falls through the cracks.

With this real-time access information available as a foundation, you can then tackle any number of pain points. For example, most often, the onboarding and offboarding processes for user accounts cry out for attention. Offboarding, both planned and unplanned, is generally simple to address with an intelligence-enabled IAM solution such as the Courion Access Assurance Suite, alleviating security and/or audit concerns.

In addition, automating at least basic, birthright access for new hires can be both a quick win and a foundation for continuous improvement. Role-based access can be incrementally added to the new hire process. You can pick and choose where it’s worth investing effort, for example, where job turnover is high, or where access is very similar across a function. Implementing some roles into this process delivers a triple win – providing the right access (better security) at the right time (improved service) and reducing the number of access requests (boost IT efficiency).

Leveraging intelligence, you can start to cut down on the effort required to develop roles. Intelligence solutions such as Access Insight use analytics to attack the mountain of access data available to find those access patterns to suggest appropriate access for a user. Let the computer do the work!

If your help desk is struggling to keep up, there are several ways to alleviate the pressure while also enhancing security and providing better customer service. For example, a streamlined, centralized access request process provides these multiple benefits.

I often remember an IT manager I worked with at a manufacturing company whose request process included 140 different forms! It was a huge improvement when we helped his organization move to a simple, one-stop access request shopping solution that included a full audit trail and built-in approval process.

With an intelligence-enabled IAM solution such as the Courion Access Assurance Suite, the request process is enhanced because it provides guidance to the user regarding what to request. This is done via intelligent modeling of user access, which suggests access options based on users in similar roles. The Access Assurance Suite also provides ‘guard rails’ against the inadvertent provisioning of inappropriate access because it automatically checks for possible policy violations, such as Segregation of Duties, during the request process.

As fundamental as it may seem, a self-service password management solution is also of great benefit to users, IT and help desk staff. Password reset calls often account for 25% or more of help desk calls. Shifting those inbound requests to a self-service process will free up IT and help desk time to tackle more high value activities while allowing end users to avoid waiting on a phone to get a password reset.

Last on this list but not last in priority, is the recertification of user access. Access recertification is a best practice and, likely, a legal and audit requirement. With an intelligence-enabled IAM solution in place this effort can begin by assembling data that details ‘who has access to what’. You can then leverage that information to provide a business-friendly recertification process that does not tax IT resources with hours of assembling spreadsheets from a multitude of systems.

While periodic re-certifications are important and necessary, intelligence also allows you to trigger automated ‘micro-certifications’ based on policies you define. For example, you may create a policy where a user who gets access to highly sensitive data outside the norm kicks off an access recertification process. This type of risk-aware micro-certification reduces the kind of access risk that exists where waiting six months for the next review could be dangerous. This has the added benefit of maintaining compliance continuously, thus expediting the next audit you face.

Clearly, it’s possible to make significant progress in a relatively short time. The key is that these are not Band-Aid solutions, but the bricks that form a solid foundation for building a comprehensive, flexible and risk-aware IAM solution.


October 25, 2014

Anil JohnA C2G Identity Services Overview of Canada [Technorati links]

October 25, 2014 01:30 PM

Canada, as part of securing its Citizen to (Federal) Government Digital Services, is currently taking a dual track approach to externalizing authentication and account management while keeping identity management in-house. This blog post provides a high level technical overview of the components of Canada’s Cyber Authentication Renewal Initiative.

Click here to continue reading. Or, better yet, subscribe via email and get my full posts and other exclusive content delivered to your inbox. It’s fast, free, and more convenient.

These are solely my opinions and do not represent the thoughts, intentions, plans or strategies of any third party, including my employer.

Mike Jones - MicrosoftJOSE -36 and JWT -30 drafts addressing additional IESG review comments [Technorati links]

October 25, 2014 06:28 AM

IETF logoThese JOSE and JWT drafts incorporate resolutions to some previously unresolved IESG comments. The primary change was adding flattened JSON Serialization syntax for the single digital signature/MAC and single recipient cases. See http://tools.ietf.org/html/draft-ietf-jose-json-web-signature-36#appendix-A.7 and http://tools.ietf.org/html/draft-ietf-jose-json-web-encryption-36#appendix-A.5 for examples. See the history entries for details on the few other changes. No breaking changes were made.

The specifications are available at:

HTML formatted versions are available at:

October 24, 2014

Radovan Semančík - nLightHow to Start Up an Open Source Company [Technorati links]

October 24, 2014 01:23 PM

Evolveum is a successful open source company now. We develop open source Identity and Access Management (IAM) software. We legally established Evolveum in 2011, but the origins of Evolveum date back to the mid-2000s. In 2014 we are getting out of the startup stage and into a sustainable stage. But it was a long way to get there. I would like to share our experiences and insights in the hope that this will help others in their attempts to establish an open source business.

The basic rules of the early game are these: It all starts with an idea. Of course. You need to figure out something that is not yet there and that people need. That is the easy part. Then you have to prototype it. Even the brightest idea is almost worthless until it is implemented. You have to spend your own time and money to implement the prototype. You cannot expect anyone to commit time or money to your project until you have a working prototype. So prepare for that, psychologically and financially. You will have to commit your savings, secure a loan or sacrifice your evenings and nights for approximately six months. If you spend less than that, the prototype is unlikely to be good enough to impress others. And you cannot have a really successful project without the support of others (I will get to that). Make sure that the prototype has a proper open source license from day one, that it is published very early and that it follows open source best practices. Any attempt to twist and bend these rules is likely to backfire when you are at your most vulnerable.

Then comes the interesting part. Now you have two options. The choice that you make now will determine the future of your company for good. Therefore be very careful here. The options are:

Fast growth: Find an investor. This is actually very easy to do if you have a good idea, a good prototype and you are a good leader. Investors are hungry for such start-up companies. You have to impress the investor. This is the reason why the prototype has to be good. If you started alone, find an angel investor. If you already have a small team, find a venture capitalist. They will give you money to grow the company. Quite a lot of money, actually. But there is a catch. Or better to say, a whole bunch of them. Firstly, the investor will take a majority of the shares in your company in exchange for the money. You will have to give away control over the company. Secondly, you will need to bind yourself to the company for several years (this may sometimes be as long as 10 years in total). Which means you cannot leave without losing almost everything. And thirdly and most importantly: you must be able to declare that your company can grow at least ten times as big in less than three years. Which means that your idea must be a super-bright ingenious thingy that really everyone desperately needs, and it also needs to be cheap to produce, easy to sell and well-timed - which is obviously quite unlikely. Or you must be inflating the bubble. Be prepared that a good part of the investment will be burned to fuel marketing, not technology. You will most likely start paying attention to business issues and there will be no time left to play with the technology any more. Also be prepared that your company is likely to be sold to some mega-corporation if it happens to be successful - with you still inside the company and business handcuffs still on your hands. You will get your money in the end, but you will have almost no control over the company or the product.

Self-funded growth: Find more people like you. Show them the prototype and persuade them to work together. Let these people become your partners. They will get company shares in exchange for their work and/or the money they invest in the company. The financiers have a very fitting description for this kind of investment: FFF, which means Friends, Family and Fools. This is the reason for the prototype to be good. You have to persuade people like you to sacrifice an arm and a leg for your project. They have to really believe in it. Use this FFF money to make a product out of your early prototype. This will take at least 1-2 years and there will be almost no income, so set aside money for this period. Once you are past that stage, the crucial part comes: use your product to generate income. No, not by selling support or subscriptions or whatever. That is not going to work at this stage. Nobody will pay enough money for the product until it is well known and proven in practice. You have to use the product yourself. You have to eat your own dogfood. You have to capitalize on the benefits that the product brings, not on the product itself. Does your product provide some service? Set up a SaaS and provide the service for money. Sell your professional services and mix in your product as an additional benefit. Sell a solution that contains your product. Does your product improve something (performance, efficiency)? Team up with a company that does this "something" and agree on sharing the revenue or savings generated by your product. And so on. You have to bring your own skin to the game. Use the early income to sustain product development. Do not expect any profit yet. Also spend some money and time on marketing. But most of the money still needs to go into the technology. If the product works well, it will eventually attract attention. And then, only then, will you get enough money from subscriptions to fund the development and make a profit.
Be prepared that it can take 3-6 years to get to this stage. And a couple more years to repay your initial investment. This is a slow and patient business. In the end you will retain your control (or significant influence) over the product and company. But it is unlikely to ever make you a billionaire. Yet, it can provide a decent living for you and your partners.

Theoretically there is also a middle way. But that depends on a reasonable investor: an investor who cares much more about the technology than about money, valuations and market trends. And this is an extremely rare breed. You can also try crowdfunding. But this seems to work well only for popular consumer products, which are not very common in the open source world. Therefore it looks like your practical options are either bubble or struggle.

And here are a couple of extra tips: Do not start with an all-engineer team. You need at least one business person on the team. Someone who can sell your product or services. Someone who can actually operate the company. You also need a visionary on the team. Whatever approach you choose, it is likely that your company will reach its full potential in 8-12 years. Not any earlier. If you design your product just for the needs of today, you are very likely to end up with an obsolete product before you can even capitalize on it. You also need a person who has their feet firmly on the ground. The product needs to start working almost from day one, otherwise you will not be able to gain momentum. Balancing vision and reality is a tough task. Also be prepared to rework parts of your system all the time. No design is ever perfect. Ignoring refactoring needs and just sprinting for features will inevitably lead to a development dead-end. You cannot afford that. It ruins all your investment. The software is never done. Software development never really ends. If it does end, then the product itself is essentially dead. Plan for a continuous and sustainable development pace during the entire lifetime of your company. Do not copy any existing product. Especially not another open source product. It is pointless. The existing product will always have a huge head start and you cannot realistically ever make that up unless the original team makes some huge mistake. If you need to do something similar to what another project already does, then team up with them. Or make a fork and start from that. If you really start on a green field, you need a truly unique approach to justify your very existence.

I really wish that someone had explained this to me five years ago. We chose to follow the self-funded way, of course. But we had to explore many business dead-ends to get there. It was not easy. But here we are, alive and well. Good times are ahead. And I hope that this description helps other teams that are just starting their companies. I wish them a much smoother start than we had.

(Reposted from https://www.evolveum.com/start-open-source-company/)

Kuppinger ColeExecutive View: SAP Audit Management - 71162 [Technorati links]

October 24, 2014 01:15 PM
In KuppingerCole

Audits are a must for any organization. The massively growing number of ever-tighter regulations in recent years and the overall growing relevance and enforcement of Corporate Governance and, as part of it, Risk Management, have led to an increase in both the number and complexity of audits. These audits affect all areas of an organization, in particular the business departments and IT.


WAYF NewsAalborg Business College now part of WAYF [Technorati links]

October 24, 2014 11:45 AM

Aalborg Business College joined WAYF today. Its students and employees thus now have the ability to log into WAYF-supporting web services using their familiar institutional accounts.

Kuppinger ColeAmazon opens data center in Germany [Technorati links]

October 24, 2014 07:37 AM
In Martin Kuppinger

Today, AWS (Amazon Web Services) announced the opening of their new region, located in Frankfurt, Germany. The new facilities actually contain two availability zones, i.e. at least two distinct data centers. AWS can now provide a local solution to customers in mainland Europe, located close to one of the most important Internet hubs. While on one hand this is important from a technical perspective (for instance, with respect to potential latency issues), it is also an important move from a compliance perspective. The uncertainty many customers feel regarding data protection laws in countries such as Germany, as well as the strictness of these regulations, is a major inhibitor preventing the rapid adoption of cloud services.

Having a region in Germany is interesting not only for German customers, but also for cloud customers from other EU countries. AWS claims that, since they provide a pure-play IaaS (Infrastructure as a Service) cloud service that just provides the infrastructure on which the VMs (virtual machines) and customers’ applications reside, their customers have full control over their own data, especially since AWS CloudHSM allows customers to hold their encryption keys securely in the cloud. This service relies on FIPS 140-2 certified hardware and is completely managed by the customer via a secure protocol. Notably, the customer can decide where their data resides. AWS does not move customer data outside of the region where the customer places it. With the new region, a customer can design a high-availability infrastructure within the EU, i.e. Germany and Ireland.

KuppingerCole strongly recommends that customers encrypt their data in the cloud in a way that allows them to retain control over their keys. However, it must be remembered that the responsibility of the data controller stretches from end to end. It is not simply limited to protecting the data held on a cloud server; it must cover the on-premise, network, and end user devices. The cloud service provider (AWS) is responsible for some but not all of this. The data controller needs to be clear about this division of responsibilities and take actions to secure the whole process, which may involve several parties.

Clearly, all this depends on which services the customers are using and the specific use they make of them. Amazon provides comprehensive information around the data compliance issues; additional information around compliance with specific German Laws is also provided (in German). The AWS approach should allow customers to meet the requirements regarding the geographical location of data and, based on the possession of keys, keep it beyond control of foreign law enforcement. However, there is still a grey area: Amazon operates the hardware infrastructure and hypervisors. There was no information available regarding where the management of this infrastructure is located, whether it is fully done from the German data center, 24×7, or whether there is a follow-the-sun or another remote management approach.

Cloud services offer many potential benefits for organizations. These include flexibility to quickly grow and shrink capacity on demand and to avoid costly hoarding of internal IT capacity. In many cases, technical security measures provided by a cloud service provider exceed those provided on-premise, and the factors inhibiting a move to cloud services are more psychological than technical. However, any business needs to be careful to avoid becoming wholly dependent upon one single supplier.

In sum, this move by Amazon reduces the factors inhibiting German and other European customers from moving to the cloud, at least at the IaaS level. For software companies from outside of the EU offering their solutions based on the AWS infrastructure as cloud services, there is less of a change. Moving up the stack towards SaaS, the questions of who is doing data processing, who is in control of data, or whether foreign law enforcement might bypass European data regulations, are becoming more complex.

Hence, we strongly recommend that customers use a standardized risk-based approach for selecting cloud service providers and ensure that this approach is approved by their legal departments and auditors. While the recent Amazon announcement reduces inhibitors, the legal aspects of moving to the cloud (or not) still require thorough analysis involving experts from both the IT and legal/audit sides.

More information on what to consider when moving to the cloud is provided in Advisory Note: Selecting your cloud provider – 70742 and Advisory Note: Security Organization, Governance, and the Cloud – 71151.

October 23, 2014

Kuppinger ColeBig News from the FIDO Alliance [Technorati links]

October 23, 2014 01:25 PM
In Alexei Balaganski

The FIDO Alliance (where FIDO stands for Fast IDentity Online) is an industry consortium formed in July 2012 with the goal of addressing the lack of interoperability among various strong authentication devices. Its current members include various strong authentication solution vendors (such as RSA, Nok Nok Labs or Yubico), payment providers (VISA, MasterCard, PayPal, Alibaba), as well as IT industry giants like Microsoft and Google. The mission of the FIDO Alliance has been to reduce reliance on passwords for authentication and to develop specifications for open, scalable and interoperable strong authentication mechanisms.

KuppingerCole has been closely following the FIDO Alliance’s developments for the last couple of years. Initially, Martin Kuppinger was somewhat skeptical about the alliance’s chances of gaining enough support and acceptance among vendors. However, the many new members joining the alliance, as well as announcements like the first FIDO authentication deployment by PayPal and Samsung earlier this year, confirm its dedication to leading a paradigm shift in the current authentication landscape. It’s not just about getting rid of passwords, but about giving users the opportunity to rely on their own personal digital identities, potentially bringing to an end the current rule of social logins.

After years of collaboration, the Universal Authentication Framework (UAF) and Universal 2nd Factor (U2F) specifications were made public in October 2014. This was closely followed by several announcements from different Alliance members, unveiling their products and solutions implementing the new FIDO U2F standard.

One that definitely made the biggest splash is, of course, Google’s announcement of strengthening their existing 2-step verification with a hardware-based second factor, the Security Key. Although Google has been a strong proponent of multifactor authentication for years, their existing infrastructure is based on one-time codes sent to users’ mobile devices. Such schemes are known to be prone to various attacks and cannot protect users from falling victim to a phishing attack.

The Security Key (which is a physical USB device manufactured by Yubico) enables much stronger verification based on cryptographic algorithms. It also means that each service has its own cryptographic key, so users can reliably tell a real Google website from a fake one. Surely, this first deployment based on a USB device has its deficiencies as well; for example, it won’t work on current mobile devices, since they all lack a suitable USB port. However, since the solution is based on a standard, it’s expected to work with any compatible authentication devices or software solutions from other alliance members.
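The phishing resistance comes from binding keys to origins. As a toy Python illustration only: HMAC stands in here for U2F’s per-service asymmetric key pairs (a big simplification), and the origins are hypothetical, but the idea that a look-alike origin can never produce the real site’s signature carries over.

```python
import hashlib, hmac, os

# Toy sketch of U2F-style origin binding. A real U2F device holds
# asymmetric key pairs; HMAC is used here only as a stand-in.
device_secret = os.urandom(32)

def key_for_origin(origin: bytes) -> bytes:
    # Each service (origin) gets its own key, derived on the device
    return hmac.new(device_secret, origin, hashlib.sha256).digest()

def sign(origin: bytes, challenge: bytes) -> bytes:
    # The browser supplies the origin, so a phishing page cannot ask
    # the device to sign with the real site's key
    return hmac.new(key_for_origin(origin), challenge, hashlib.sha256).digest()

challenge = b"server-challenge"
real = sign(b"https://accounts.google.com", challenge)
phish = sign(b"https://accounts.goog1e.example", challenge)
print(real != phish)  # the look-alike origin yields a different signature
```

Because the per-origin key never leaves the device and the origin is asserted by the browser rather than typed by the user, a one-time code stolen by a phishing site is useless against the real site.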

Currently, U2F support is available only in the Google Chrome browser, but since the standard is backed by such a large number of vendors, including major players like Microsoft or Salesforce, I am sure that other browsers will follow soon. Another big advantage of an established standard is the availability of libraries that enable quick inclusion of U2F support in existing client applications and websites. Yubico, for example, provides a set of libraries for different languages. Google offers open source reference code for the U2F specification as well.

In a sense, this first large-scale U2F deployment by Google is just the first step in a long journey towards the ultimate goal of getting rid of passwords completely. But it looks like a large group sharing the same vision has a much better chance of reaching that goal sooner than anybody trying to walk all the way alone.

October 22, 2014

Brad Tumy - OracleResetting Forgotten Passwords with @ForgeRock #OpenAM [Technorati links]

October 22, 2014 11:45 PM

Implementing the “Resetting Forgotten Passwords” functionality as described in the OpenAM Developer’s Guide requires some additional custom code.

It’s pretty straight forward to implement this functionality and can be done in 4 steps (per the Developer’s Guide):

  1. Configure the Email Service
  2. Perform an HTTP Post with the user’s id
  3. OpenAM looks up email address (based on user id) and sends an email with a link to reset the password
  4. Intercept the HTTP GET request to this URL when the user clicks the link.

All of this functionality is available out of the box with the exception of #4. I wrote some really simple JavaScript that can be deployed to the OpenAM server to handle this. This code was written as a proof of concept and doesn’t include any data validation or error handling, but that could be added fairly easily. The script can be deployed to the root directory of your OpenAM deployment. Just be sure to update the Forgot Password Confirmation Email URL in the OpenAM Console under Configuration > Global > REST Security.

I have made the code available on my GitHub page and you are welcome to use it or modify it.

As described on the README:

The file resetPassword.jsp is an optional file that will display a field for the user to provide their id and will then POST to /json/users?_action=forgotPassword (Step #2 from the Developer’s Guide).
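That POST from step 2 is a plain REST call, so it can be sketched in any language. A minimal Python sketch of building the request is below; the base URL is hypothetical, and the `username` field name is an assumption to verify against your OpenAM version’s REST API.

```python
import json
from urllib import request

# Hypothetical OpenAM base URL; adjust to your deployment
OPENAM_BASE = "https://openam.example.com/openam"

def forgot_password_request(user_id: str) -> request.Request:
    # Step 2: POST the user's id to the forgotPassword action endpoint;
    # OpenAM then looks up the email address and sends the reset link
    body = json.dumps({"username": user_id}).encode()
    return request.Request(
        url=f"{OPENAM_BASE}/json/users?_action=forgotPassword",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = forgot_password_request("demo")
print(req.get_method(), req.full_url)
```

Sending the request (e.g. with `request.urlopen(req)`) is left out here, since the interesting part for step 2 is just the endpoint and payload shape.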


Thanks to @Aldaris and @ScottHeger for providing advice while I was working on this.

Filed under: IdM Tagged: ForgeRock, IdM, OpenAM, REST

Julian BondI want to buy a final edition 7th gen iPod Classic 160 with the v2.0.5 firmware in the UK. I don't mind... [Technorati links]

October 22, 2014 02:31 PM
I want to buy a final edition 7th gen iPod Classic 160 with the v2.0.5 firmware in the UK. I don't mind a few scratches as long as the display is still ok. Anyone?

I need a 7th generation Classic 160 because this went back to a single-platter drive, and there's a 240GB disk that fits and works. The previous 6th gen 160 (which I have) used a dual-platter drive with an unusual interface and can't be upgraded.

About 2 years ago, the final 7th gen Classic iPod was upgraded with slightly different hardware that worked with the final v2.0.5 firmware. If it came with 2.0.4 then it probably can't be upgraded. 2.0.5 is desirable because there's a software setting to disable the EU volume limit. When Apple did all this, they didn't actually update the product codes or SKU#. So people will claim they have a 7th gen MC297QB/A or MC297LL/A and it might or might not be the right one. The only way to be sure is to try to update to 2.0.5.

So at the moment I'm chasing several on eBay but having to wait for the sellers to confirm what they're actually selling before putting in bids, and losing out. Apparently I'm not alone, as prices are rising. The few remaining brand new ones are quoted at "Buy Now" prices at a premium, sometimes twice the final RRP. Gasp!

I f***ing hate Apple for playing all these games. I hate them for discontinuing the Classic 160. I hate that there's no real alternative.

1st world problems, eh? It seems like just recently I keep running up against this. I'm constantly off balance because things I thought were sorted and worked OK, are no longer available. Or the company's gone bust or been taken over. Or the product has been updated and what was good is now rubbish. Or the product is OK, but nobody actually stocks the whole range so you have to buy it on trust over the net.
[from: Google+ Posts]

Julian BondAphex Twin leaking a fake version of Syro a few weeks before the official release was genius. It's spread... [Technorati links]

October 22, 2014 08:55 AM
Aphex Twin leaking a fake version of Syro a few weeks before the official release was genius. It's spread all over the file sharing sites, so it's hard to find the real release. The file names match[1]. The music is believable but deliberately lacks lustre. It's really a brilliant pastiche of an Aphex Twin album, as if some Russian producer had gone out of their way to make a textbook homage to what they thought Aphex Twin was doing.


[1] What's a bit weird though is that the MP3 files not only have the same filenames but seem to be the same size and have the same checksum.
 EB Reviews: Fake 'Syro' – Electronic Beats »
Am I one of those people who can't tell the difference between authentic Aphex tunes and sneering knockoffs?

[from: Google+ Posts]
October 21, 2014

CourionData Breach? Just Tell It Like It Is [Technorati links]

October 21, 2014 10:25 PM

Access Risk Management Blog | Courion

Kurt Johnson, Vice President of Strategy and Corporate Development, has posted a blog on Wired Innovation Insights titled Data Breach? Just Tell It Like It Is.

In the post, Kurt discusses the negative PR implications of delayed breach disclosure and recommends improving your breach deterrence and detection capabilities by continuously monitoring identity and access activity for anomalous patterns and problems, such as orphan accounts, duties that need to be segregated, ill-conceived provisioning or just unusual activity.

Read the full post now.


Neil Wilson - UnboundIDUnboundID LDAP SDK for Java 2.3.7 [Technorati links]

October 21, 2014 05:32 PM

We have just released the 2.3.7 version of the UnboundID LDAP SDK for Java. You can get the latest release online at the UnboundID Website, the SourceForge project page, or in the Maven Central Repository.

Complete release note information is available on the UnboundID website, but some of the most significant changes include:

OpenID.netNotice of Vote for Errata to OpenID Connect Specifications [Technorati links]

October 21, 2014 05:43 AM

The official voting period will be between Friday, October 31 and Friday, November 7, 2014, following the 45 day review of the specifications. For the convenience of members, voting will actually open a week before Friday, October 31 on Friday, October 24 for members who have completed their reviews by then, with the voting period still ending on Friday, November 7, 2014.

If you’re not already a member, or if your membership has expired, please consider joining to participate in the approval vote. Information on joining the OpenID Foundation can be found at https://openid.net/foundation/members/registration.
A description of OpenID Connect can be found at http://openid.net/connect/. The working group page is http://openid.net/wg/connect/.

The vote will be conducted at https://openid.net/foundation/members/polls/86.

– Michael B. Jones, OpenID Foundation Secretary

OpenID.netNotice of Vote for Implementer’s Draft of OpenID 2.0 to OpenID Connect Migration Specification [Technorati links]

October 21, 2014 05:38 AM

The official voting period will be between Friday, October 31 and Friday, November 7, 2014, following the 45 day review of the specification. For the convenience of members, voting will actually open a week before Friday, October 31 on Friday, October 24 for members who have completed their reviews by then, with the voting period still ending on Friday, November 7, 2014.

If you’re not already a member, or if your membership has expired, please consider joining to participate in the approval vote. Information on joining the OpenID Foundation can be found at https://openid.net/foundation/members/registration.

A description of OpenID Connect can be found at http://openid.net/connect/. The working group page is http://openid.net/wg/connect/.

The vote will be conducted at https://openid.net/foundation/members/polls/81.

– Michael B. Jones, OpenID Foundation Secretary

October 20, 2014

Radovan Semančík - nLightProject Provisioning with midPoint [Technorati links]

October 20, 2014 01:09 PM

Evolveum midPoint is a very unique Identity Management (IDM) system. MidPoint is a robust open source provisioning solution. Being open source, midPoint is developed in a fairly rapid, incremental and iterative fashion. And a recent version introduced a capability that allows midPoint to reach beyond the traditional realm of identity management.

Of course, midPoint is great at managing and synchronizing all types of identities: employees, contractors, temporary workers, customers, prospects, students, volunteers - you name it, midPoint does it. MidPoint can also manage and synchronize a functional organizational structure: divisions, departments, sections, etc. Even though midPoint does this better than most other IDM systems, these features are not exactly unique by themselves. What is unique about midPoint is that it refines these mechanisms into generic and reusable concepts. MidPoint mechanisms are carefully designed to work together. This makes midPoint much more than just a sum of its parts.

One interesting consequence of this development approach is the unique ability to provision projects. We all know the new lean project-oriented enterprises. The importance of the traditional tree-like functional organizational structure is diminishing and a flat project-based organizational structure is taking the lead. Projects govern almost every aspect of company life, from the development of a groundbreaking new product to the refurbishing of an office space. Projects are created, modified and closed almost on a daily basis. This is usually done manually by system administrators: create a shared folder on a file server, set up proper access control lists, create a distribution list, add members, create a new project group in Active Directory, add members, create an entry in the task tracking system, bind it to the just-created group in Active Directory ... It all takes a couple of days or weeks to be done. This is not very lean, is it?

MidPoint can easily automate this process. Projects are yet another type of organizational unit that midPoint manages. MidPoint can maintain an arbitrary number of parallel organizational structures. Therefore, adding an orthogonal project-based structure to an existing midPoint deployment is a piece of cake. MidPoint also supports membership of a single user in an arbitrary number of organizational units, which efficiently creates a matrix organizational structure. As midPoint projects are just organizational units, they can easily be synchronized with other systems. MidPoint can be configured to automatically create the proper groups, distribution lists and entries in the target systems. And as midPoint knows who the members of the project are, it can also automatically add the correct accounts to the groups it has just created.

Project provisioning

However, this example is just too easy. MidPoint can do much more. And anyway, the modern leading-edge lean progressive organizations are not only project-based but also customer-oriented. The usual requirement is to support not only internal projects but especially customer-facing projects. Therefore I have prepared a midPoint configuration that illustrates automated provisioning of such customer projects.

The following screenshot illustrates the organizational structure maintained in midPoint. It shows a project named Advanced World Domination Program (or AWDP for short). The project is implemented for the customer ACME, Inc. The project members are jack and will. You can also see another customer and a couple of other projects there. The tabs also show the different (parallel) organizational structures maintained by midPoint. But now we only care about the Customers structure.

Project provisioning: midPoint

MidPoint is configured to replicate the customer organizational unit to LDAP. Therefore it will create the entry ou=ACME,ou=customers,dc=example,dc=com. It is also configured to synchronize the project organizational unit to LDAP as an LDAP group. Therefore it will create the LDAP entry cn=AWDP,ou=ACME,ou=customers,dc=example,dc=com with a proper groupOfNames object class. MidPoint is also configured to translate project membership in its own database into group membership in LDAP. Therefore the cn=AWDP,... group contains the members uid=jack,... and uid=will,....
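The DN construction described above can be sketched in a few lines. This is an illustrative Python sketch only; in a real deployment the mapping is expressed declaratively in midPoint's resource configuration, not in code like this.

```python
# Names taken from the ACME/AWDP example above
BASE = "dc=example,dc=com"

def customer_dn(customer: str) -> str:
    # Each customer becomes an organizational unit under ou=customers
    return f"ou={customer},ou=customers,{BASE}"

def project_dn(customer: str, project: str) -> str:
    # Each project becomes a groupOfNames group entry under its customer
    return f"cn={project},{customer_dn(customer)}"

print(project_dn("ACME", "AWDP"))
# cn=AWDP,ou=ACME,ou=customers,dc=example,dc=com
```

The group's member attribute then simply lists the DNs of the project members' accounts, which midPoint maintains as membership changes.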

Project provisioning: LDAP

Similar configuration is used to synchronize the projects to GitLab. The customers are transformed to GitLab groups and GitLab projects are created in accord with midPoint projects.

Project provisioning: GitLab

... and project members are correctly set up:

Project provisioning: GitLab

All of this is achieved using a relatively simple configuration. The configuration consists of just four XML files: one file to define access to each resource, one file to define the customer meta-role and one for the customer project meta-role. No custom code is used except for a couple of simple scriptlets that amount to just a few lines of Groovy. Therefore this configuration is easily maintainable and upgradeable.

And midPoint can do even more still. As midPoint has a built-in workflow engine, it can easily handle project membership requests and approvals. MidPoint has a rich delegated administration capability, therefore management of project information can be delegated to project managers. MidPoint synchronization is bi-directional and flexible: one system can be authoritative for some part of the project data (e.g. scope and description) and another system for a different type of data (e.g. membership). And so on. The possibilities are countless.

This feature is a must-have for any lean project-oriented organization. Managing projects manually is the 20th-century way of doing things and a waste of precious resources. Finally, midPoint is here to bring project provisioning into the 21st century. And it does it cleanly, elegantly and efficiently.

(Reposted from https://www.evolveum.com/project-provisioning-midpoint/)

Vittorio Bertocci - MicrosoftTechEd Europe, and a Quick Jaunt in UK & Netherlands [Technorati links]

October 20, 2014 08:07 AM


I can’t believe TechEd Europe is already a mere week away!

This year I have just 1 session, but TONS of new stuff to cover… including news we haven’t broken yet! 75 mins will be very tight, but I’ll do my best to fit everything in. If you want to put it in your calendar, here are the coordinates:

DEV-B322 Building Web Apps and Mobile Apps Using Microsoft Azure Active Directory for Identity Management
Thursday, October 30 5:00 PM – 6:15 PM, Hall 8.1 Room I

As usual, I would be super happy to meet you during the event. I’m scheduled to arrive on Monday evening, and I plan to stay in Barcelona until Friday morning – there’s plenty of time to sync up, if we plan in advance. Feel free to contact me here.
Also, I plan to be at ask the expert night – it will be right after my talk, hence that should provide a nice extension to the QA.

Given that I don’t often cross the pond anymore these days, the week after I’ll stick around and spend 3 days in UK (Reading) and 2 days in Amsterdam. (I’ll also spend the weekend in Italy, but I doubt you want to hear about identity development on a Saturday Smile). I believe the agenda is already pretty full, but if you are in the area and interested feel free to ping me – we can definitely check with the local teams if we can still find a slot.

Looking forward to meeting and chatting!

October 18, 2014

Nat SakimuraGuidelines Published on Providing Information and Explanations to Consumers Regarding Privacy in Online Services (METI/Ministry of Economy, Trade and Industry) [Technorati links]

October 18, 2014 11:08 PM



These guidelines build on the findings of the IT Integration Forum's Personal Data Working Group from two years ago, and on the "Standards for Enriching Information Provision and Explanations to Consumers" drawn up through last year's trial of prior consultations with businesses. They summarize, among other things, how consumers should be notified when personal data is collected from them, and what should be done when the purpose of use or the scope of disclosure changes.

Similar initiatives are under way in many countries, and especially online, too much divergence makes compliance difficult for businesses, so international harmonization is also needed. As part of that effort, the guidelines are scheduled to be submitted to ISO/IEC, and a Study Period proposal will be made at the ISO/IEC JTC1 SC 27/WG 5 meeting in Mexico starting on October 20.






Nat Sakimura: Executive Order issued to improve the security of consumer financial transactions – chip-equipped credit cards, multi-factor authentication for government sites, and more [Technorati links]

October 18, 2014 10:29 PM


Executive Order: Improving the Security of Consumer Financial Transactions (The White House)


In Japan and Europe, credit cards with IC chips are now the mainstream, but in the US they are still far from widespread. Section 1 of this Executive Order is meant to improve that situation: by moving government credit-card payments to chip-based transactions, it aims to prime the pump so that card issuers start issuing chip-equipped cards.

As it happened, the Daily Telegraph ran this on the very same day:

Sorry Mr President; your credit card has been declined
Barack Obama’s card rejected at trendy New York restaurant Estela






(Source) Rosa Prince: “Sorry Mr President; your credit card has been declined”, Daily Telegraph, 2014/10/17

It reads like a planted story, but that only shows how serious they are. Incidentally, the story also ran on the AP wire and Fox News. Nobody reads a plain security story, but everybody will read “President Obama’s card was declined!” – a shrewd media strategy.

Identity theft has been a social problem for quite some time, and Section 2 is the response to it – less a concrete countermeasure than an order to draw up countermeasures.

And Section 3 raises the security level required when citizens access their own personal information on government sites, thereby improving privacy protection. According to what I had heard in advance from White House sources, this, like Section 1, is intended to prime the pump for the private sector. A revision of SP 800-63 may follow to support it [4]. The 18-month deadline presumably means the FCCX implementation is expected to be finished by then.





[1] Executive Order –Improving the Security of Consumer Financial Transactions, http://www.whitehouse.gov/the-press-office/2014/10/17/executive-order-improving-security-consumer-financial-transactions

[2] National Strategy for Trusted Identity in Cyberspace

[3] chip-and-pin payment system

[4] Under the current SP 800-63, multi-factor authentication implies LoA 3, but demanding LoA 3 identity proofing would probably be too harsh, so they may end up requiring multi-factor authentication at LoA 2, or separating the credential level from the identity-proofing level.


Anil John: A Simple Framework for Trusted Identities [Technorati links]

October 18, 2014 10:15 AM

What does it take to enable a person to say who they are in the digital world while having the same confidence, protections and rights that they expect in the real world? This guest post by Tim Bouma explores the question in a manner that is relevant across jurisdictions, independent of delivery channels and technology neutral.

Click here to continue reading. Or, better yet, subscribe via email and get my full posts and other exclusive content delivered to your inbox. It’s fast, free, and more convenient.

These are solely my opinions and do not represent the thoughts, intentions, plans or strategies of any third party, including my employer.

Mike Jones - Microsoft: JOSE -35 and JWT -29 drafts addressing AppsDir review comments [Technorati links]

October 18, 2014 01:29 AM

I’ve posted updated JOSE and JWT drafts that address the Applications Area Directorate review comments. Thanks to Ray Polk and Carsten Bormann for their useful reviews. No breaking changes were made.

The specifications are available at:

HTML formatted versions are available at:

October 17, 2014

Kantara Initiative: Kantara Initiative Helps Accelerate Time-to-Market for Digital Citizen Services [Technorati links]

October 17, 2014 03:17 PM

Premier US Approved Trust Framework Provider (TFP) supports the Presidential Executive Order and the vision of the US National Strategy for Trusted Identities in Cyberspace

Piscataway, NJ, October 17, 2014 – Kantara Initiative, the premier US ICAM Approved Trust Framework Provider (TFP) approving 3rd party Credential Service Providers (CSPs), is positioned to support today’s Presidential Executive Order and the vision of the US National Strategy for Trusted Identities in Cyberspace (NSTIC).

Kantara Initiative helps to connect and accelerate identity services’ time-to-market by enabling trusted on-line transactions through our innovations and compliance programs.

Kantara’s Identity Assurance Program provides technical and policy approvals of businesses that want to connect to US Federal Government Agencies and commercial markets, ensuring companies like CA Technologies, Experian, ForgeRock, and Radiant Logic can offer compelling new government services while also safely and securely complying with government technical rules and regulations.

Board of Trustees President Allan Foster (ForgeRock) says, “ForgeRock is thrilled to support the US Presidential Executive Order to drive identity services innovation. Kantara Members are positioned at the strategic intersection where relationship based identity services meet trust and usability.”

“As the first 3rd party assessed Kantara Approved Identity Proofing Service Component, Experian continues to innovate using its proven industry position to deliver identity proofing services at the highest industry standard,” said Kantara Board of Trustees Vice President Philippe de Raet (Experian).

Kantara Trustee Members IEEE-SA, Internet Society, and NRI provide a solid foundation of expertise developing open standards that connect unique opportunities via shared resources and know-how.

“The Internet Society is pleased to see this commitment, at the highest level, to steps that will drive the digital identity infrastructure to mature more rapidly, with a focus on security and authentication that moves us beyond the era of user IDs and passwords,” said Kantara Board Member Robin Wilton (ISOC).

Join Kantara Initiative to influence strategy and grow markets where business, research and innovation intersect.

About Kantara Initiative: Kantara Initiative accelerates identity services markets by developing innovations and programs to support trusted on-line transactions. The membership of Kantara Initiative includes international communities, industry, research & education, and government stakeholders. Join. Innovate. Trust. http://kantarainitiative.org





OpenID.net: The Name is the Thing: “The ARPU of Identity” [Technorati links]

October 17, 2014 01:49 PM

The name is the thing. The name of this Open Identity Exchange white paper, “The ARPU of Identity”, is deliberate. ARPU, Average Revenue Per User, is one metric telcos use to measure success. By deliberately using a traditional telco lens, this paper puts emerging Internet identity markets into a pragmatic perspective. The focus of the white paper is on how mobile network operators (MNOs) and other telcos can become more involved in the identity ecosystem and thereby improve their average revenue per user, or ARPU. This perspective continues OIX’s “Economics of Identity” series – or, as some call it, the “how do we make money in identity” tour of the emerging Internet identity ecosystem. OIX commissioned a white paper reporting the first quantitative analysis of the Internet identity market in the UK; the HMG Cabinet Office hosted workshops on the topic at KPMG’s headquarters in London and at the University of Washington’s Gates Center in Seattle.

The timing of this paper on business interoperability is coincidental with work groups in the OpenID Foundation developing the open standards that MNOs and other telco players will use to ensure technical interoperability. GSMA’s leadership with OIX on pilots in the UK Cabinet Office Identity Assurance Program and in the National Strategy on Trusted Identity in Cyberspace offers opportunities to test both business and technical interoperability leveraging open standards built on OpenID Connect. The timing is the thing. The coincidence of white papers, workshops and pilots in the US, UK and Canada with leading MNOs provides a real-time opportunity for telcos to unlock their unique assets to increase ARPU and protect the security and privacy of their subscribers/citizens.

In my OpenID Foundation blog, I referenced Crossing the Chasm, where Geoffrey A. Moore argues there is a chasm between future interoperability that technology experts build into standards and the pragmatic expectations of the early majority. OIX White Papers, workshops and pilots help build the technology tools and governance rules needed for the interoperability to successfully cross the “chasm.”

Several OIX White Papers speak to the “supply side”: how MNOs and others can become Identity Providers (IDPs), Attribute Providers or Signal Providers in Internet identity markets. Our next OIX White Paper borrows an industry meme (and T-shirt) for its title, “There’s No Party Like A Relying Party”. That paper speaks to the demand side: Relying Parties (RPs) like banks, retailers and others rely on identity attributes and account signals to better serve and secure customers and their accounts, and those transactions in turn rely on technical, business and legal interoperability.

By looking at the “flip sides” of supply and demand, OIX White Papers help us better understand the ARPU, the needs for privacy and security and the economics of identity.


OpenID.net: Crossing the Chasm of Consumer Consent [Technorati links]

October 17, 2014 01:47 PM

This week Open Identity Exchange publishes a white paper on the “ARPU of Identity”. The focus of the white paper is on how MNOs and telecommunications companies can monetize identity markets and thereby improve their average revenue per user, or ARPU. Its author, the highly regarded data scientist Scott Rice, makes a point that caught my eye: the difficulty of federating identity systems when consumer consent requirements and implementations vary widely and are a long way from being interoperable. It got my attention because Open Identity Exchange and the GSMA lead pilots in the US and UK with leading MNOs, funded in part by government. The National Strategy on Trusted Identity in Cyberspace and the UK Cabinet Office Identity Assurance Program are helping fund pilots that may address these issues. Notice and consent involves a governmental interest in protecting the security and privacy of its citizens online, so it is a natural place for the private sector to leverage the public-private partnerships Open Identity Exchange has helped lead.

Notice and consent laws have been around for years. The Organization for Economic Co-operation and Development (OECD) first published its seminal Privacy Guidelines in 1980. But in 1980 there was no World Wide Web and no cell phone. Credit bureaus, as we know them today, didn’t exist; there was no “big data” and there were no data brokers collecting millions of data points on billions of people. What privacy law protected then was very different from what it needs to protect now. Back then, strategies to protect consumers were based on the assumption of a few transactions each month, not a few transactions a day. The OECD guidelines haven’t changed in the last 34 years. Privacy regulations and, specifically, the notice and consent requirements of those laws lag further and further behind today’s technology.

In 2013 (and updated in March of this year), OIX Board Member company Microsoft, and Oxford University’s Oxford Internet Institute (OII) published a report outlining recommendations for revising the 1980 OECD Guidelines.  Their report makes recommendations for rethinking how consent should be managed in the internet age.  It makes the point that expecting data subjects to manage all the notice and consent duties of their digital lives in circa 2014 is unrealistic if we’re using rules developed in 1980.  We live in an era where technology tools and governance rules assume the notice part of “notice and consent” requires the user to agree to a privacy policy.  The pragmatic choice is to trust our internet transactions to “trusted” Identity Providers (IDPs), Service Providers (SPs) and Relying Parties (RPs). The SPs, RPs, IDPs, government and academic organizations that make up the membership of Open Identity Exchange share at least one common goal: increasing the volume, velocity and variety of trusted transactions on the web.

The GSMA, Open Identity Exchange and OpenID Foundation are working on pilots with industry-leading MNOs, IDPs and RPs to promote interoperability, federation, privacy and respect for the consumer information of which they are stewards. The multiple industry sectors represented in OIX are building profiles to leverage the global adoption of open standards like OpenID Connect. Open identity standards and private-sector-led public-private partnership pilots help build the business, legal and technical interoperability needed to protect customers while also making the job of being a consumer easier.

Given the coincidence of pilots in the US, UK and Canada over the coming months, it is increasingly important to encourage government and industry leaders and privacy advocates to build on the interoperability and standardization that standards like OpenID Connect bring to authentication, with consumer consent and privacy baked in.


October 16, 2014

Kuppinger Cole: IAM for the User: Achieving Quick-wins in IAM Projects [Technorati links]

October 16, 2014 07:35 PM
In KuppingerCole Podcasts

Many IAM projects struggle or even fail because demonstrating their benefit takes too long. Quick-wins that are visible to the end users are a key success factor for any IAM program. However, just showing quick-wins is not sufficient, unless there is a stable foundation for IAM delivered as result of the IAM project. Thus, building on an integrated suite that enables quick-wins through its features is a good approach for IAM projects.

Watch online

Mythics: It’s Time to Upgrade to Oracle Database 12c, Here is Why, and Here is How [Technorati links]

October 16, 2014 06:16 PM

Well, it's that time again, when the whole Oracle database community will be dealing with the questions around upgrading to Database 12c from 11g (and some…

Kuppinger Cole: Mobile, Cloud, and Active Directory [Technorati links]

October 16, 2014 04:56 PM
In Martin Kuppinger

Cloud IAM is moving forward. Even though there is no common understanding of which features are required, we see more and more vendors – both start-ups and vendors from the traditional field of IAM (Identity and Access Management) – entering that market. Aside from providing an alternative to established on-premise IAM/IAG, we also see a number of offerings that focus on adding new capabilities for managing external users (such as business partners and consumers) and their access to Cloud applications – a segment we call Cloud User and Access Management.

There are a number of expectations we have for such solutions. Besides answers on how to fulfill legal requirements regarding data protection laws, especially in the EU, there are a number of other requirements. The ability to manage external users and customers with flexible login schemes and self-registration, inbound federation of business partners and outbound federation to Cloud services, and a Single Sign-On (SSO) experience for users are among these. Another one is integration back to Microsoft Active Directory and other on-premise identity stores. In general, being good in hybrid environments will remain a key success factor and thus a requirement for such solutions in the long run.

One of the vendors that have entered the Cloud IAM market is Centrify. Many will know Centrify as a leading-edge vendor in Active Directory integration of UNIX, Linux, and Apple Macintosh systems. However, Centrify has grown beyond that market for quite a while, now offering both a broader approach to Privilege Management with its Server Suite and a Cloud User and Access Management solution with its User Suite.

In contrast to other players in the Cloud IAM market, Centrify takes a somewhat different approach. On one hand, they go well beyond Cloud-SSO and focus on strong integration with Microsoft Active Directory, including supporting Cloud-SSO via on-premise AD – not a surprise when viewing the company’s history. On the other hand, their primary focus is on the employees. Centrify User Suite extends the reach of IAM not only to the Cloud but also to mobile users.

This makes Centrify’s User Suite quite different from other offerings in the Cloud User and Access Management market. While it provides common capabilities such as SSO to all types of applications, integration with Active Directory, strong authentication of external users, and provisioning to Cloud/SaaS applications, its primary focus is not simply on extending this to external users. Instead, Centrify focuses on extending its reach to support both Cloud and mobile access, provided by a common platform and delivered as a Cloud service.

This approach is unique, but it makes perfect sense for organizations that want to open up their enterprises to both better support mobile users as well as to give easy access to Cloud applications. Centrify has strong capabilities in mobile management, providing a number of capabilities such as MDM (Mobile Device Management), mobile authentication, and integration with Container Management such as Samsung Knox. All mobile access is managed via consistent policies.

Centrify User Suite is somewhat different from the approach other vendors in the Cloud User and Access Management market have taken. However, it might be the solution that best fits the needs of customers, particularly those primarily looking at how to enable their employees for better mobile and Cloud access.

Matthew Gertner - AllPeers: How To Decide When It Has Come Time To Bug Out [Technorati links]

October 16, 2014 04:34 PM

There are a variety of circumstances and situations that will require you to bug out of your home or temporary shelter. For those who are unfamiliar, the term bug out refers to when your position has been compromised and you need to move on. Although this term found its origins in army and soldier slang, it still rings true for emergency situations. If you and your family are holed up after a natural disaster and the damage has been catastrophic, it may be time for you to move on and find a better place to stay. Considering a variety of factors, you will have to make the tough decision to leave your possessions and other valuables behind in favour of a safer environment. Because you can’t take everything with you, having a bug out bag (along with a first-aid kit and food stockpile) is essential for every family. You will need to consider the unique circumstances that surround your disaster to assess when the proper time to bug out is.

Has The Natural Disaster Caused Significant Damage To Your Home And Community

After any disaster, it’s your responsibility to step outside and assess the damage. In some places, tornadoes and hurricanes can level entire city blocks in a matter of two to five minutes. If your community and home have been destroyed and you’re able to recognize this from the safety of your hideaway, it may be time to bug out and find emergency shelters set up somewhere else. Connecting with other neighbors who are contemplating the same situation can give you a network of people to move with, thereby protecting yourselves from people who are in panic mode and looking to steal from families. By keeping your cool and calmly letting your family know of the situation up top, you can all come to a collective agreement about what your next steps should be.

Do You Feel Unsafe In Your Home Or Shelter

There will come times when you’ll need to bug out from shelters as well, depending on a variety of circumstances. If you’ve been holed up at home for a while and more damage starts appearing, then it may be time for you to grab the kit and the family and move everyone to a safer location. A home that has been destroyed above can still pose some serious threats, especially if your safe zone is located underground. Additional collapse of your home could trap you inside, and rescue crews may not reach you for months on end. This is why it is crucial to make a decision early on, for the right reasons. Being attached to your home doesn’t mean leaving it is the end of the world. Material possessions, just like homes, can be replaced. What matters is whether your family is in immediate danger by staying in the same spot. Always consider bugging out a viable option.

For more information about bugout bags, visit uspreppers.com


OpenID.net: Crossing the Chasm In Mobile Identity: OpenID Foundation’s Mobile Profile Working Group [Technorati links]

October 16, 2014 03:45 PM

Mobile Network Operators (MNOs) worldwide are in various stages of “crossing the chasm” in the Internet identity markets. As Geoffrey A. Moore noted in his seminal work, the most difficult step is making the transition between early adopters and pragmatists. The chasm crossing Moore refers to points to the bandwagon effect and the role standards play as market momentum builds.

MNOs are pragmatists. As they investigate becoming identity providers, open standards play a critical role in how they can best leverage their unique technical capabilities and interoperate with partners. The OpenID Foundation’s Mobile Profile Working Group aims to create a profile of OpenID Connect tailored to the specific needs of mobile networks and devices thus enabling usage of operator ID services in an interoperable way.

The Working Group starts with the challenge that OpenID Connect relies on the e-mail address to determine a user’s OpenID provider (OP). In the context of mobile identity, the mobile phone number or other suitable mobile network data are considered more appropriate. The working group will propose extensions to the OpenID discovery function that use this data to determine the operator’s OP, while taking care to protect data privacy, especially that of the mobile phone number. We are fortunate that the working group is led by an expert in ‘crossing the chasm’ of email and phone number interoperability: Torsten Lodderstedt, Head of Development of Customer Platforms at Deutsche Telekom, who is also an OpenID Foundation Board member.
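To make the discovery problem concrete, here is a sketch of what a phone-number-based discovery request could look like. OpenID Connect Discovery uses WebFinger with a `rel` value identifying the OpenID issuer; the `tel:` resource syntax and the operator endpoint below are assumptions for illustration, since the working group’s actual extension was still being defined at the time of writing.

```shell
# Hypothetical discovery request: a tel: URI stands in for the usual
# e-mail-style identifier. The "rel" value is the standard OpenID Connect
# issuer relation; host and phone number are made up.
resource="tel:%2B441234567890"
rel="http://openid.net/specs/connect/1.0/issuer"
echo "GET https://op.example-mno.test/.well-known/webfinger?resource=${resource}&rel=${rel}"
```

The response, as in standard OpenID Connect Discovery, would name the issuer whose configuration the relying party then fetches.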

The Working Group’s scope is global, as geographic regions are typically served by multiple independent mobile network operators, including virtual network operators. The number of potential mobile OPs a particular relying party needs to set up a trust relationship with will likely be very high. The working group will propose an appropriate and efficient model for trust and client credential management based on existing OpenID Connect specifications. The Foundation is collaborating with the Open Identity Exchange to build a trust platform that combines the “rules and tools” necessary to ensure the privacy, operational, and security requirements of all stakeholders.

Stakeholders, such as service providers, will likely have different requirements regarding authentication transactions. The OpenID Connect profile will therefore also define a set of authentication policies that operator OPs are recommended to implement and that service providers can choose from.

This working group has been set up in cooperation with OpenID Foundation member the GSMA to coordinate with the GSMA’s Mobile Connect project. We are fortunate that David Pollington, Senior Director of Technology at GSMA, and his colleagues have been key contributors to the Working Group’s charter and will ensure close collaboration with GSMA members. There is an important coincidence in the GSMA’s and OIX’s joint leadership of mobile identity pilots with leading MNOs in the US and UK. All intermediary working group results will be proposed to this project and to participating operators for adoption (e.g. in pilots), but they can also be adopted by any other interested parties. The OIX and GSMA pilots in the US and UK can, importantly, inform the OIDF working group’s standards development process. That work on technical interoperability is complemented by work on “business interoperability”: OIX will publish a white paper tomorrow, “The ARPU of Identity”, that speaks to the business challenges MNOs face in leveraging their highly relevant and unique assets in Internet identity.

The OpenID Foundation Mobile Profile Working Group’s profile builds on the worldwide adoption of OpenID Connect. The GSMA and OIX pilots offer an international test bed for both business and technical interoperability based on open standards. Taken together with the ongoing OIX white papers and workshops on the “Economics of Identity”, “chasm crossing” is within sight of the most pragmatic stakeholders.


Ludovic Poitou - ForgeRock: POODLE SSL Bug and OpenDJ [Technorati links]

October 16, 2014 01:40 PM

A new security issue hit the streets this week: the POODLE SSL bug. We immediately received a question on the OpenDJ mailing list about how to remediate the vulnerability.
While the vulnerability is mostly triggered by the client, it’s also possible to prevent attacks by disabling the use of SSLv3 altogether on the server side. Beware that disabling SSLv3 might break old legacy client applications.

OpenDJ uses the SSL implementation provided by Java and, by default, will allow any of the TLS protocols supported by the JVM. You can restrict the set of protocols for the Java VM installed on the system using deployment.properties (on the Mac, using the Java Preferences Panel, in Advanced Mode), or using system properties at startup (-Ddeployment.security.SSLv3=false). I will let you search through the official Java documentation for the details.
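As a concrete sketch of the deployment.properties route on Linux, the per-user file could be updated as below. The path is the usual per-user default and differs on other platforms; treat it as an assumption and check the Java documentation for your system.

```shell
# Sketch only: disable SSLv3 for Java deployment technologies for the
# current user. The path is the common Linux default; on the Mac use the
# Java Preferences Panel as noted above.
props="$HOME/.java/deployment/deployment.properties"
mkdir -p "$(dirname "$props")"
echo "deployment.security.SSLv3=false" >> "$props"
```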

But you can also control the protocols used by OpenDJ itself. If you want to do so, you will need to change settings in several places:

For example, to change the settings in the LDAPS Connection Handler, you would run the following command:

# dsconfig set-connection-handler-prop --handler-name "LDAPS Connection Handler" \
--add ssl-protocol:TLSv1 --add ssl-protocol:TLSv1.1 --add ssl-protocol:TLSv1.2 \
-h localhost -p 4444 -X -D "cn=Directory Manager" -w secret12 -n

Repeat for the LDAP Connection Handler and the HTTP Connection Handler.

For the crypto manager, use the following command:

# dsconfig set-crypto-manager-prop \
--add ssl-protocol:TLSv1 --add ssl-protocol:TLSv1.1 --add ssl-protocol:TLSv1.2 \
-h localhost -p 4444 -X -D "cn=Directory Manager" -w secret12 -n

And for the Administration Connector:

# dsconfig set-administration-connector-prop \
--add ssl-protocol:TLSv1 --add ssl-protocol:TLSv1.1 --add ssl-protocol:TLSv1.2 \
-h localhost -p 4444 -X -D "cn=Directory Manager" -w secret12 -n

All of these changes will take effect immediately, but they will only impact new connections established after the change.

Filed under: Directory Services Tagged: directory, directory-server, ForgeRock, opendj, poodle, security, ssl, vulnerability
October 15, 2014

Julian Bond: Hilarious bit of spam email today. [Technorati links]

October 15, 2014 07:52 PM
Hilarious bit of spam email today.

Are you a business man or business woman, politician, musical, student and you want to be very rich,powerful and be famous in life. You can achieve your dreams by been a member of the Illuminati. With this all your dreams and heart desire can be fully accomplish, Illuminati cult online today and get instant sum of $25,000monthly for becoming a member and $100,000 for doing what you like to do . so if you have the interest, you can call, +447064249899 or +447053824724 

But I'm having trouble finding any 5s in 781, fnord.
[from: Google+ Posts]

Nat Sakimura: The University of Tokyo falls from 5th to 28th – Harvard takes the top spot. Recalculating Diamond’s ranking of universities that produce “usable” graduates [0] [Technorati links]

October 15, 2014 12:06 PM










Table 1 – Revised ranking of universities producing “usable” graduates

Rank  Diamond rank  University  Usable  Not usable  Net  Usable %
1 25 ハーバード大学 24 0 24 100.00%
2 28 国際教養大学 16 0 16 100.00%
3 6 東京工業大学 496 79 417 86.26%
4 11 国際基督教大学 209 41 168 83.60%
5 4 一橋大学 580 115 465 83.45%
6 27 広島大学 22 5 17 81.48%
7 1 慶応義塾大学 2170 536 1634 80.19%
8 9 東北大学 250 62 188 80.13%
9 7 大阪大学 374 108 266 77.59%
10 3 京都大学 1041 328 713 76.04%
11 10 東京理科大学 256 81 175 75.96%
12 13 神戸大学 163 52 111 75.81%
13 24 津田塾大学 50 18 32 73.53%
14 2 早稲田大学 1838 684 1154 72.88%
15 22 電気通信大学 57 22 35 72.15%
16 30 京都工芸繊維大学 18 7 11 72.00%
17 14 北海道大学 144 59 85 70.94%
18 29 小樽商科大学 21 9 12 70.00%
19 21 東京外国語大学 65 28 37 69.89%
20 12 同志社大学 231 105 126 68.75%
21 16 名古屋大学 113 52 61 68.48%
22 17 関西学院大学 140 82 58 63.06%
23 20 九州大学 110 66 44 62.50%
24 8 明治大学 520 322 198 61.76%
25 25 千葉大学 68 44 24 60.71%
26 22 横浜国立大学 123 88 35 58.29%
27 15 上智大学 230 165 65 58.23%
28 5 東京大学 1596 1161 435 57.89%
29 19 中央大学 217 171 46 55.93%
30 18 筑波大学 103 82 21 55.68%
31 30 立教大学 139 128 11 52.06%
32 44 関西大学 108 138 -30 43.90%
33 60 青山学院大学 166 307 -141 35.10%
34 61 日本大学 239 457 -218 34.34%
35 43 南山大学 18 46 -28 28.13%
36 55 成蹊大学 41 114 -73 26.45%
37 62 法政大学 125 348 -223 26.43%
38 35 京都産業大学 11 31 -20 26.19%
39 50 國學院大學 23 67 -44 25.56%
40 56 近畿大学 38 119 -81 24.20%
41 58 獨協大学 27 116 -89 18.88%
42 34 成城大学 6 26 -20 18.75%
43 51 専修大学 12 62 -50 16.22%
44 49 お茶の水女子大学 10 53 -43 15.87%
45 41 埼玉大学 6 33 -27 15.38%
46 54 東海大学 14 78 -64 15.22%
47 48 東洋大学 9 51 -42 15.00%
48 37 群馬大学 4 27 -23 12.90%
49 59 学習院大学 22 151 -129 12.72%
50 57 明治学院大学 13 95 -82 12.04%
51 53 駒沢大学 8 64 -56 11.11%
52 39 鳥取大学 2 26 -24 7.14%
53 52 帝京大学 4 55 -51 6.78%
54 40 高崎経済大学 2 28 -26 6.67%
55 33 宇都宮大学 0 20 -20 0.00%
56 32 関東学院大学 0 20 -20 0.00%
57 36 明星大学 0 21 -21 0.00%
58 38 茨城大学 0 24 -24 0.00%
59 42 亜細亜大学 0 28 -28 0.00%
60 45 文教大学 0 37 -37 0.00%
61 47 国士舘大学 0 39 -39 0.00%
62 46 名城大学 0 39 -39 0.00%
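The rightmost column appears to be the recomputed ratio usable / (usable + not usable); a quick spot check for the Tokyo Institute of Technology row (496 usable, 79 not usable) reproduces the 86.26% figure in the table:

```shell
# Spot check: recompute the "usable" percentage for one row of Table 1
# (496 "usable" vs. 79 "not usable" responses).
awk 'BEGIN { printf "%.2f%%\n", 100 * 496 / (496 + 79) }'
# prints 86.26%
```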




[0] After posting this, I noticed the BLOGOS article “The University of Tokyo falls from 5th to 29th! Recalculating the ranking of universities that produce usable graduates” (http://blogos.com/article/96472/). By my calculation the University of Tokyo came 28th, so I deliberately revised my title to mimic that one. The original title was “Diamond’s ‘ranking of universities producing usable graduates’ ← that’s no good”.

[1] “[Breaking] The ‘worst’ university in the ranking of universities usable for work revealed” (http://netgeek.biz/archives/23399)

[2] Since “usable / not usable” is presumably a relative judgment made within each respondent’s own workplace, comparing the answers on a single scale is of questionable value in the first place – but let’s set that aside.



Kuppinger Cole: 09.12.2014: Access Governance for today’s agile, connected businesses [Technorati links]

October 15, 2014 09:29 AM
In KuppingerCole

In today’s fast-changing world, the digitalization of business is essential to keep pace. The new ABC – Agile Businesses Connected – is the paradigm organizations must follow. They must connect to their customers, partners and associates. They must become agile to respond to the changing needs of the market. They must understand, manage, and mitigate the risks in this connected world. One important aspect of this is the governance of the ever-increasing number of identities – customers,...
October 14, 2014

Radovan Semančík - nLight: The Old IDM Kings Are Dead. Long Live the New Kings. [Technorati links]

October 14, 2014 06:04 PM

It can be said that Identity Management (IDM) was born in the early 2000s. That was the time when many people realized that a single big directory just wouldn’t do it. They realized that something different was needed to bring order into the identity chaos. That was the dawn of the user provisioning system. The early market was dominated by only a handful of small players: Access360, Business Layers, Waveset and Thor. Their products were children of the dot-com age: enterprise software built on state-of-the-art platforms such as J2EE. These products were quite terrible by today’s standards, but they somehow did a job that no other software could do. It is therefore no surprise that these companies were acquired very quickly. Access360 is now an IBM Tivoli product. Business Layers was acquired by Netegrity, which was later acquired by CA. Waveset was taken by Sun. And Thor ended up in Oracle. By 2005 the market was “consolidated” again.

Development of all the early products went on. A lot of new features were introduced, and some new players entered the market. Even Microsoft hastily hopped on the bandwagon, and the market became quite crowded. What started as provisioning technology was later rebranded “compliance” and “governance” to distinguish individual products, and even more features were added. But the basic architecture of the vast majority of these products remained the same through all these years. One simply cannot evolve the architecture and clean up a product while under enormous pressure to deliver new features. The architecture of these products therefore remains essentially as it was originally designed in the early 2000s. And it is almost impossible to change.

That was the first generation of IDM systems.

The 2000s were a very exciting time in software engineering. Nothing short of a revolution spread through the software world. Developers discovered The Network and started to use SOAP, which led to the SOA craze. Later, the new-age developers came to dislike SOAP and created the RESTful movement. XML reached its zenith and JSON became popular. The idea of object-relational mapping spread far and wide. The term NoSQL was coined. The heavyweight enterprise-oriented architectures of the early 2000s were mostly abandoned, replaced by the lightweight network-oriented architectures of the late 2000s. And everything was suddenly moving up into the clouds.

It is obvious that old-fashioned products carrying a decade of technological debt could not keep up with all of this. The products started to get weaker in the late 2000s, yet very few people noticed. The first-generation products had gained enormous business momentum, and that simply does not go away overnight. Still, by 2010 there were perhaps only a couple of practical IDM products left; the rest were too bloated, too expensive and too cumbersome to be really useful. Their owners hesitated too long to re-engineer and refresh the products, and it is too late to do that now. These products need to be replaced. And they will be replaced. Soon.

This situation is quite clear now, but it was not that clear just a few years ago. Yet several teams began new projects in 2010, almost at the same time. Maybe that was triggered by the Oracle-Sun acquisition, or maybe the time was just right to change something ... we will probably never know for sure. The projects started almost on a green field, with an enormous effort ahead of them. But the teams went on, and after several years of development there is a whole new breed of IDM products: lean, flexible, scalable and open.

This is the second generation of IDM systems.

The second-generation systems are built on network principles. They all have lightweight and flexible architectures, and most of them are professional open source! There is ForgeRock OpenIDM with its lightweight approach and extreme flexibility. There is the practical Evolveum midPoint with a very rich set of features. And there is Apache Syncope with its vibrant and open community. These are just three notable examples of the new generation: IDM systems that have arrived right on time.

(Reposted from https://www.evolveum.com/old-idm-kings-dead-long-live-new-kings/)

Courion: Courion Named a Leader in Access Governance by KuppingerCole [Technorati links]

October 14, 2014 01:38 PM

Access Risk Management Blog | Courion

Kurt Johnson: Today Courion was named a leader in the 2014 Leadership Compass for Access Governance by KuppingerCole, a global analyst firm. Courion’s Access Assurance Suite was recognized for its product features and innovation, and as a very strong offering that covers virtually all standard requirements. In the management summary of the report, Courion is highlighted as the first to deliver advanced access intelligence capabilities. [Image: KuppingerCole Leadership Compass]

Courion was also recognized as a leader in the Gartner Magic Quadrant for Identity Governance and Administration (IGA) and as a leader in the KuppingerCole Leadership Compass for Identity Provisioning earlier this year.


Julian Bond: Here we are on the surface of this spaceship, travelling through time and space at 1 second per second... [Technorati links]

October 14, 2014 12:45 PM
Here we are on the surface of this spaceship, travelling through time and space at 1 second per second (roughly) and about 360 km/sec towards Leo.

So what I want to know is, who's in charge? Because I have to say there are some aspects of the cruise that I'm not entirely happy with.
[from: Google+ Posts]

Mike Jones - Microsoft: JOSE -34 and JWT -28 drafts addressing IESG review comments [Technorati links]

October 14, 2014 12:37 PM

Updated JOSE and JWT specifications have been published that address the IESG review comments received. The one set of normative changes was to change the implementation requirements for RSAES-PKCS1-V1_5 from Required to Recommended- and for RSA-OAEP from Optional to Recommended+. Thanks to Richard Barnes, Alissa Cooper, Stephen Farrell, Brian Haberman, Ted Lemon, Barry Leiba, and Pete Resnick for their IESG review comments, plus thanks to Scott Brim and Russ Housley for additional Gen-ART review comments, and thanks to the working group members who helped respond to them. Many valuable clarifications resulted from your thorough reviews.

The specifications are available at:

HTML formatted versions are available at:
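For readers who haven't looked inside these specifications, the JWS compact serialization that the JOSE documents define is simple enough to sketch with the standard library alone. The following is an illustrative HS256 (HMAC-SHA256) example of my own, not a substitute for a vetted JOSE implementation:

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    # Base64url without padding, as the JOSE specs require.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def jws_hs256(payload: dict, key: bytes) -> str:
    # JWS compact serialization: header.payload.signature
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    signing_input = f"{header}.{body}".encode()
    sig = b64url(hmac.new(key, signing_input, hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"

def jws_verify(token: str, key: bytes) -> bool:
    header, body, sig = token.split(".")
    expected = b64url(hmac.new(key, f"{header}.{body}".encode(),
                               hashlib.sha256).digest())
    return hmac.compare_digest(sig, expected)

token = jws_hs256({"sub": "alice", "iss": "example"}, b"secret")
assert jws_verify(token, b"secret")
```

HS256 is one of the algorithms the specifications mark as Required, which is what makes it a convenient example here.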

Julian Bond: One for the bucket list. Get yourself to a train station in Morocco in early June 2015. Get picked up... [Technorati links]

October 14, 2014 11:55 AM
One for the bucket list. Get yourself to a train station in Morocco in early June 2015. Get picked up and taken to a small village in the Rif mountains where you'll be looked after and fed by families in their homes for 3 days. The Master Musicians Of Joujouka then play their special blend of Sufi music each day and late into each night. Places limited to 50 and it's €360 all in.


There's a slightly cheaper one-day version (€100) on 15-Nov-2014, tied in with the Beat Conference (17/19 Nov) in Tangier to celebrate the 100th anniversary of William Burroughs' birth. The conference fee is basically B&B in the hotel plus food.
 The Quietus | News | Master Musicians Of Joujouka Festival 2015 »
Plus, one-off date in the village this November as part of William S. Burroughs centenary celebrations

[from: Google+ Posts]

Kuppinger Cole: SAP enters the Cloud IAM market – the competition becomes even tougher [Technorati links]

October 14, 2014 07:28 AM
In Martin Kuppinger

The market for Cloud IAM and in particular Cloud User and Access Management – extending the reach of IAM to business partners, consumers, and Cloud applications through a Cloud service – is growing, both with respect to market size and service providers. While there were a number of start-ups (such as Ping Identity, Okta and OneLogin) creating the market, we now see more and more established players entering the field. Vendors such as Microsoft, Salesforce.com or Centrify are already in. Now SAP, one of the heavyweights in the IT market, has recently launched their SAP Cloud Identity Service.

The new service focuses on managing access for all types of users, their authentication, and Single Sign-On to on-premise applications, SAP Cloud applications, and 3rd-party Cloud services. This includes capabilities such as SSO, user provisioning, self-registration, user invitation, and more. There is also optional support for social logins.

Technically, there is a private instance per tenant running on the SAP Cloud Identity Service, which acts as an Identity Provider (IdP) for Cloud services and other SAML-ready SaaS applications, but also as an interface for external user authentication and registration. This connects back to the on-premise infrastructure for accessing SAP systems and other environments, also providing SSO for users already logged in to SAP systems.
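The flow described above is standard SAML 2.0: the per-tenant instance acts as the IdP, and a service provider redirects the browser to it with an AuthnRequest. As a rough illustration only, the sketch below builds such a request with the standard library; the endpoint URL and entity ID are hypothetical placeholders, and a real deployment would add signing and further attributes:

```python
import uuid
import xml.etree.ElementTree as ET
from datetime import datetime, timezone

SAMLP = "urn:oasis:names:tc:SAML:2.0:protocol"
SAML = "urn:oasis:names:tc:SAML:2.0:assertion"

def build_authn_request(idp_sso_url: str, sp_entity_id: str) -> str:
    # Serialize with the conventional samlp:/saml: prefixes.
    ET.register_namespace("samlp", SAMLP)
    ET.register_namespace("saml", SAML)
    req = ET.Element(f"{{{SAMLP}}}AuthnRequest", {
        "ID": "_" + uuid.uuid4().hex,  # xsd:ID must not start with a digit
        "Version": "2.0",
        "IssueInstant": datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ"),
        "Destination": idp_sso_url,
    })
    issuer = ET.SubElement(req, f"{{{SAML}}}Issuer")
    issuer.text = sp_entity_id  # identifies the requesting service provider
    return ET.tostring(req, encoding="unicode")

xml_doc = build_authn_request("https://tenant.example.com/saml2/sso",
                              "https://sp.example.com/metadata")
```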

With this new offering, SAP is becoming an interesting option in that field. While they cannot boast a large number of pre-configured Cloud services – some other players claim to have more than 3,000 Cloud services ready for on-boarding – SAP provides a solid conceptual approach to Cloud IAM, which is strongly tied into the SAP HANA platform, the SAP HANA Cloud, and on-premise SAP infrastructures.

This tight integration into SAP environments, together with the fact that SAP provides its own certified data center infrastructure, and the fact that it is from SAP (and SAP buyers tend to buy from SAP), makes it a strong contender in the emerging Cloud User and Access Management market.

October 13, 2014

Julian Bond: China's per capita CO2 production (7.2 tonnes pa) is now higher than the EU's (6.8). It's still half... [Technorati links]

October 13, 2014 07:00 PM
China's per capita CO2 production (7.2 tonnes pa) is now higher than the EU's (6.8). It's still half the USA's (16.5) per capita figure, but it has increased fourfold since 2001.

In absolute terms, the shares are more like China 29%, USA 15%, EU 10% of total annual CO2 production.
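The per-capita and total figures are mutually consistent, as a quick back-of-the-envelope check shows. Note the population figures and the roughly 35 Gt global total below are my own approximations for 2013, not from the article:

```python
# Rough consistency check: per-capita emissions (tonnes/yr) times population
# should reproduce the quoted shares of global CO2 output.
per_capita = {"China": 7.2, "USA": 16.5, "EU": 6.8}     # tonnes CO2 per person per year
population = {"China": 1.36e9, "USA": 0.32e9, "EU": 0.51e9}  # approx., 2013
global_total = 35e9  # tonnes CO2 per year, approximate

for region in per_capita:
    total = per_capita[region] * population[region]
    share = 100 * total / global_total
    print(f"{region}: {total / 1e9:.1f} Gt/yr, ~{share:.0f}% of global")
```

That lands at roughly 28%, 15% and 10%, close to the shares quoted above.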

So now what happens?


Obviously, it's all OK, because Paul Krugman (Nobel prize winning economist and NYT columnist) says "Saving the planet would be cheap; it might even be free." with a bit of carbon tax, carbon credits and other strong measures to limit carbon emissions.
And anyone who disagrees is just indulging in "climate despair".
 World carbon emissions out of control »
I am writing another post that has been triggered by a news article, only this time it is about climate change. The headline 'China's per capita carbon emissions overtake EU's' came as a bit of a shock. 'While the per capita ...

[from: Google+ Posts]

Kuppinger Cole: 11.12.2014: Understand your access risks – gain insight now [Technorati links]

October 13, 2014 10:35 AM
In KuppingerCole

Access Intelligence: Enabling insight at any time, not one year later when recertifying again. Imagine having less work and better risk mitigation in your Access Governance program. What sounds hard to achieve can become reality by complementing traditional approaches to Access Governance with Access Intelligence: analytics that support identifying the biggest risks, simply, quickly, at any time. Knowing the risks helps in mitigating them, by running ad hoc recertification only for these...

Kuppinger Cole: Advisory Note: Security Organization, Governance, and the Cloud - 71151 [Technorati links]

October 13, 2014 10:14 AM
In KuppingerCole

The cloud provides an alternative way of obtaining IT services that offers many benefits including increased flexibility as well as reduced cost.  This document provides an overview of the approach that enables an organization to securely and reliably use cloud services to achieve business objectives.


Kuppinger Cole: Advisory Note: Maturity Level Matrixes for Identity and Access Management/Governance - 70738 [Technorati links]

October 13, 2014 09:31 AM
In KuppingerCole

KuppingerCole Maturity Level Matrixes for the major market segments within IAM (Identity and Access Management) and IAG (Identity and Access Governance). Foundation for rating the current state of your IAM/IAG projects and programs.

October 12, 2014

Anil John: What Is the Role of Transaction Risk in Identity Assurance? [Technorati links]

October 12, 2014 01:30 PM

Identity assurance is a measure of the needed level of confidence in an identity to mitigate the consequences of authentication errors and misuse of credentials. As the consequences become more serious, so does the required level of assurance. Given that identity assurance typically has not taken transaction-specific data as input, how should digital services factor transaction risk into their identity assurance models?
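One hypothetical way to picture the question: treat the required assurance level as the maximum of the service's baseline and a level derived from the transaction itself. The sketch below is purely illustrative; the thresholds are invented, and the 1-4 scale only loosely echoes familiar level-of-assurance schemes:

```python
# Toy model: transaction risk (here, just monetary value) can raise the
# assurance level a service demands above its static baseline.
def level_for_value(value: float) -> int:
    # Invented thresholds mapping transaction value to an assurance level 1-4.
    if value < 100:
        return 1
    if value < 10_000:
        return 2
    if value < 100_000:
        return 3
    return 4

def required_assurance(baseline_level: int, transaction_value: float) -> int:
    # The riskier of the two views wins.
    return max(baseline_level, level_for_value(transaction_value))
```

Under this toy model, a service with a baseline level of 2 would still demand level 3 for a large transfer.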

Click here to continue reading. Or, better yet, subscribe via email and get my full posts and other exclusive content delivered to your inbox. It’s fast, free, and more convenient.

These are solely my opinions and do not represent the thoughts, intentions, plans or strategies of any third party, including my employer.

October 09, 2014

ForgeRock: Taking heart about the future of identity [Technorati links]

October 09, 2014 06:09 PM

By Eve Maler, ForgeRock VP of Innovation and Emerging Technology

A lot of us doing digital identity know that innovation in this space comes in fits and starts. There have been times when the twice-yearly Internet Identity Workshops felt like exercises in marking time: Okay, if InfoCard isn’t quite it for consented attribute sharing, what’s the answer? And what do you mean everyone doesn’t yet have a server under their desks at home running an OpenID 2.0 identity provider?

At other times, there’s excitement in the air because we feel like we’re on to something big, and if we just align our polarities in the right way, we can really get somewhere. It’s out of these moments that OpenID Connect (a merger of ideas from Facebook and OpenID Artifact Binding folks) — for one example — was born.

I for one am feeling that excitement now. There’s a confluence of factors:

The “new Venn of access control” that I talked about at ForgeRock’s Identity Relationship Management Summit in June is coming — and all of us practitioners have a chance to make dramatic progress if we can just…align.

There are a couple of key opportunities to do that in the short term: IIW XIX in Mountain View in three weeks, and the IRM Summit in Dublin — along with a Kantara workshop on the trusted identity exchange and the “age of IRM agility” — in four.

If you join me at these venues, you can catch up on important User-Managed Access (UMA) progress, and also hear about — and maybe get involved in — an exciting new group that Debbie Bucci of HHS ONC and I are working to spin up at the OpenID Foundation: the HEAlth Relationship Trust Work Group. The HEART WG is all about profiling the “Venn” technologies of OAuth, OpenID Connect, and UMA to ensure that patient-centric health data sharing is secure, consented, and interoperable the world over. (If you’re US-based like me and have visited a doctor lately, you’ve probably been onboarding to a lot of electronic health record systems — how would you like to help ensure that these systems are full participants in the 21st century? Amirite?)
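For the curious, the front channel of the OAuth 2.0 / OpenID Connect authorization code flow that HEART profiles boils down to a redirect like the one sketched below. The endpoint, client ID and redirect URI are hypothetical placeholders, and a real HEART profile adds many more constraints:

```python
import secrets
from urllib.parse import urlencode

def authorization_url(authz_endpoint: str, client_id: str,
                      redirect_uri: str) -> str:
    params = {
        "response_type": "code",             # authorization code flow
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": "openid",                   # "openid" marks an OIDC request
        "state": secrets.token_urlsafe(16),  # CSRF protection
    }
    return f"{authz_endpoint}?{urlencode(params)}"

url = authorization_url("https://idp.example.org/authorize",
                        "my-client", "https://app.example.org/cb")
```

The user agent is sent to this URL, authenticates at the IdP, and comes back to the redirect URI with a short-lived code that the client exchanges for tokens on the back channel.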

See you there!

- Eve (@xmlgrrl)


The post Taking heart about the future of identity appeared first on ForgeRock.

Kuppinger Cole: Leadership Compass: Access Governance - 70948 [Technorati links]

October 09, 2014 12:20 PM
In KuppingerCole

Leaders in innovation, product features, and market reach for Identity and Access Governance and Access Intelligence. Your compass for finding the right path in the market.