November 21, 2014

Kuppinger Cole: SAP Security Made Easy. How to Keep Your SAP Systems Secure [Technorati links]

November 21, 2014 10:37 AM
In KuppingerCole Podcasts

Security in SAP environments is a key requirement of SAP customers. SAP systems are business critical. They must run reliably, they must remain secure – despite a growing number of attacks. There are various levels of security to enforce in SAP environments. It is not only about user management, access controls, or code security. It is about integrated approaches.

Watch online

Nat Sakimura: From IDM to IRM: The Changing Horizon of Identity [Technorati links]

November 21, 2014 06:00 AM




November 20, 2014

Ian Glazer: No Person is an Island: How Relationships Make Things Better [Technorati links]

November 20, 2014 05:26 PM

(The basic text to my talk at Defragcon 2014. The slides I used are at the end of this post and if they don’t show up you can get them here.)

What have we done to manage people, their “things,” and how they interact with organizations?

The sad truth is that we tried to treat the outside world of our customers and partners like the inside world of employees. And we’ve done poorly at both. I mean, think about it: “Treat your customers like you treat your employees” is rarely a winning strategy. If it were, just imagine the Successories you’d have to buy for your customers… on second thought, don’t do that.

We started by storing people as rows in a database. Rows and rows of people. But treating people like just a row in a database is, essentially, sociopathic behavior. It ignores the reality that you, your organization, and the other person, group, or organization are connected. We made every row, every person an island – disconnected from ourselves.

What else did we try? In the world of identity and access management we started storing people as nodes in an LDAP tree. We created an artificial hierarchy and stuffed people, our customers, into it. Hierarchies and our love for them is the strange lovechild of Confucius and the military industrial complex. Putting people into these false hierarchies doesn’t help us delight our customers. And it doesn’t really help make management tasks any easier. We made every node, every person, an island – disconnected from ourselves.

We tried other things, realizing that those two left something to be desired. We tried roles. You have this role and we can treat you as such. You have that role and we should treat you like this. But how many people actually do what their job title says? How many people actually have meaningful job titles? And whose customers come with job titles? So, needless to say, roles didn’t work as planned in most cases.

We knew this wasn’t going to work. We’ve known since 1623. John Donne told us as much. And his words then are more relevant now than he could have possibly imagined then. Apologies to every English teacher I have ever had as I rework Donne’s words:

No one is an island, entire of itself; everyone is a piece of the continent, a part of the main. If a clod be washed away by the sea, we are the less. Anyone’s death diminishes us, because we are involved in the connected world.

What should we do?

If treating our customers like employees isn’t a winning strategy, if making an island out of each of our customers won’t work, if we are involved with the connected world, then what should we do?

We have to acknowledge that relationships exist. We have to acknowledge that connections exist between a customer, their devices and things, and us. No matter what business you are in. No matter if you are a one-woman IT consulting shop, or two guys and a letterpress on Etsy, or even a multi-national corporation – you are connected to your customers; you have a relationship with them.

This isn’t necessarily a new thought and, in fact, there are two disciplines that have sought to map and use those relationships: CRM and VRM. Customer relationship management models one organization to many people. Vendor relationship management models one person to many organizations. Both unknowingly share an important truth – the connections between people and organizations are key. It’s not “CRM vs VRM;” it’s “CRM and VRM.” What I am proposing is the notion of IRM – identity relationship management. IRM puts the relationships front and center, but more on that in a minute.

I believe that acknowledging relationships re-humanizes our digital relationships with one another. I believe that the absence of acknowledged relationships is one of the reasons why online forums descend into antisocial behavior. It’s because those systems don’t make you feel like you have a relationship with the other party. “There’s no person there, just a tweet.” And this is a shame – that platforms meant to provide scalable human-to-human interactions and contact and closeness often dehumanize those very interactions.

I believe that we ought to use relationships to manage our interactions. You can’t get delighted customers by just treating them like a row in a database. You cannot manage data from all of your customers’ “things” without fully recognizing there’s a customer there with whom you have a relationship.

What I know about relationships

I believe we must build “relationship-literate” systems and processes. We should stop operating on rows of customers and start using digital representations of relationships. What follows are nine aspects of relationships that can serve as design considerations for relationship-literate systems.


If we are going to use relationships as a management tool in this world of ever-increasing connections between people, their things, and organizations, then we have to tackle scalability issues. The three obvious ones are huge numbers of actors, attributes, and relationships. But there’s another that is often left out: administration. If we don’t do something better than we do today, we’ll be stuck with the drop-list from hell, in which an admin has to scroll through a few thousand entries to find the “thing” she wants to manage.


I’ve got to know I’m in a relationship before anything else can meaningfully happen. I can’t buy a one-sided birthday card: Happy birthday to a super awesome partner who doesn’t know who I am. All parties have to know. Otherwise there is an asymmetry of power. And that tends to tilt towards the heavier object, e.g. the organization and not the individual. Familiar with the Law of Gross Tonnage? It’s part of the maritime code that says the heavier ship has the right of way. Now growing up outside of Boston, this is basically how I learned to drive. The Law of Gross Tonnage is useful in that situation but absolutely inequitable and unhelpful in terms of delighting a customer.


There’s got to be a way for us to know if multiple parties are in a relationship. This can come in many flavors: single-party, multi-party, and 3rd-party asserted. Things like Facebook can serve as that 3rd party, vouching that two people are connected. But should there be alternatives to social networks for this? And who connects people and their “things”?


We want our relationships to be able to do something. And by looking at the relationship each party can know what they can do. Without having to consult some distant authority. Without waiting for an online connection. The relationship leads to action and does so without consulting some back-end service somewhere.


Just because a relationship can do something doesn’t mean it can do everything. We need to be able to put limits on what things and people can do; we all need constraints. Examples of this are things like granting consent or enforcing digital rights management.


Some things are in a relationship forever. This is useful to know when you want to make sure that a “thing” was really made by one of your partners and is authentic.


Some relationships can be transferred. We have legal proxies that we transfer a relationship to on a temporary or conditional basis. There are plenty of familial relationships in which we transfer authority on a semi-permanent basis. And some relationships are permanently transferred – like selling a jet engine to someone.


Many relationships exist but aren’t very useful until a condition changes. My relationship to my auto insurance provider isn’t a very vibrant relationship. I don’t use the relationship on most days. But when I get into an accident, that inert relationship between my car, the insurer, and me becomes active. There’s something out there, some condition out there, that can make a relationship active and vital.


Some relationships end or have to come to an end. What happens then? What happens to the data now that the relationship is gone? At this point we have to turn to renowned privacy expert, John Mellencamp for his insight. You might not know it but he wrote about the Right to Be Forgotten and other privacy issues in “Jack and Diane”. As he sang, “oh yeah data goes on / long after the thrill of the relationship is gone.” But this problem is at the root of the “Right to Be Forgotten” debate. This will only become a larger problem as our digital footprints get heavier and heavier. And this gets especially messy when relationships that I am not even aware of create data about me and my devices and my things.

In summary, relationships:

If we were to do this, how would things be better?

Relationships add back the fidelity and color that we have drained from the digital identity world. By focusing on relationships, we would behave more like we do in the real world, but with all the efficiencies of the digital world. We’d be able to use familiar language to describe how and what people and things can do.

How should we do this?

I don’t fully know. This is the least satisfying and most accurate thought in this whole talk. I don’t fully know. And I am looking for help.

So I lied to you dear audience. This is a sales pitch. I want you to do something. If you have any interest in this vague notion of relationships and using them to make our world better, then I ask you to join the Kantara Initiative. It’s free to join and free to participate. It’s the home of some amazing identity and IoT thinking. And we need your help. I’d like you to join the Identity Relationship Management working group. I’d love it if you could bring your use cases to us. Share with a group of awesome people from around the world how you, your business, your service, your things connect and relate. Help us stop treating people like islands unto themselves. Help us to use relationships to make our digital interactions rich, meaningful, humanizing, and manageable.

No Person is an Island: How Relationships Make Things Better from iglazer

Radovan Semančík - nLight: Never Use Closed-Source IAM Again [Technorati links]

November 20, 2014 03:45 PM

I will never use any closed-source IAM again. You will have to use force to persuade me to do it. I'm not making this statement lightly. I worked with closed-source IAM systems for the better part of the 2000s and made quite a good living doing it. But I'm not going to do that again. Never ever.

What's so bad about closed-source IAM? It is the very fact that it is closed. A deployment engineer cannot see inside it. Therefore the engineer has inherently limited possibilities. No documentation is ever perfect and no documentation ever describes the system well enough. Therefore the deployment engineer is also likely to have a limited understanding of the system. And an engineer who does not understand what he is doing is unlikely to do a good job.

Closed-source software also leads to vendor lock-in. That makes it unbelievably expensive in the end. The Sun-Oracle acquisition of 2010 clearly demonstrated the impact of vendor lock-in for me. Our company was a very successful Sun partner in the 2000s. But we almost went out of business because of that acquisition and the events that followed. That was the moment when I realized that this must never happen again.

Open source is the obvious alternative. But how good is it really? Can it actually replace closed-source software? The short answer is a clear and loud "Yes!". The situation might have been quite bad in the 2000s. But now there are plenty of viable open source alternatives for every IAM component: directory servers, simple SSO, comprehensive SSO, social login and federation, identity management, RBAC and privileges and so on. There is plenty to choose from. Most of these projects are in a very good and stable state. They are at least as good as closed-source software.

But what is so great about open source software? It makes no sense to switch to open source just because of some philosophically-metaphysical differences, does it? So where are the tangible benefits? Simply speaking there are huge advantages to open source software all around you. But they might not be exactly what you expect.

Contrary to popular belief, the ability to meddle with the source code does not bring any significant direct advantage to the end customer. The customers are unlikely to even see the source code, let alone modify it. But this ability brings a huge advantage to the system integrator who deploys the software. The deployment engineers do not need vendor assistance with every deployment step. The source code is the ultimate documentation, so the deployment engineers can work almost independently. This eliminates the need for hugely overpriced vendor professional services - which also reduces the cost of the entire solution. The deployment engineers can fix product bugs themselves and submit the fixes back to the vendor, which significantly speeds up the project. Any competent engineer can fix a simple bug in a couple of days if he has the source code. He or she does not need to raise each and every trivial issue, fight the way through all the levels of a bloated support organization, and then wait for weeks or months for an answer from the vendor's development team. The open source way is so much more efficient. This dramatically reduces the deployment time and also the overall deployment cost.

The source code also allows ultimate customization. Software architects know very well how difficult it is to design and implement a good extensible system. As with many other things, it is very easy to do it badly and extremely difficult to do it well. A system which has all the extensibility that IAM needs would inevitably become extremely complicated. Therefore the best way to customize a system is sometimes a simple modification of the source code. And this is only possible in open source projects. Oh yes, there is this tricky upgradeability problem. Customizations are difficult to upgrade, right? Right. Customized closed-source software is usually very difficult to upgrade. But that does not necessarily apply to well-managed open source projects. Distributed source control software such as Git makes this kind of customization feasible. We have been using this method for years and it has survived many upgrades already.

But perhaps the most important advantage is the lack of vendor lock-in. The source code of an open source project does not "belong" to any single individual or company. If the product is good, there will be many open source companies that can offer the services that only a single vendor can provide in the closed-source world. This creates healthy competition. In the extreme case a partner can always take over product maintenance if the vendor misbehaves. Therefore it is unlikely that the cost of an open source solution will spin out of control. Open source also provides much better protection against vendor failure. Yes, I'm aware that many companies behind open source projects are small and that they can easily fail. But in the open source world a company failure does not necessarily mean project failure. If the project is any good it will continue even if the original maintainer fails. Other companies will take over, most likely by employing at least part of the original engineers. And the project goes on. This is the ultimate business continuity guarantee. And it has happened several times already. On the other hand, the failure (or acquisition) of a closed-source vendor is often fatal for the project. This has also happened several times. And we still feel the consequences today.

The difference between the open-source and closed-source worlds is enormous. Any engineer who ever goes there and understands open source is very unlikely to go back. Open source is much easier to work with. The engineers have the power to change what they do not like. Open source is much more cost efficient and the business model is sustainable. And it actually works!

Therefore I would never ever use closed-source IAM again.

(Reposted from

Kaliya Hamlin - Identity Woman: Protected: Dear IDESG, I’m sorry. I didn’t call you Nazi’s. [Technorati links]

November 20, 2014 02:18 PM

This content is password protected. To view it please enter your password below:

IS4U: FIM 2010: Event driven scheduling [Technorati links]

November 20, 2014 12:25 PM
In a previous post I described how I implemented a windows service for scheduling Forefront Identity Manager.

Since then, my colleagues and I have used it in every FIM project. For one project, I was asked if it was possible to trigger the synchronization "on demand". A specific trigger for a synchronization cycle, for example, was the creation of a user in the FIM portal. After some brainstorming and Googling, we came up with a solution.

We asked ourselves the following question: "Is it possible to send a signal to our existing Windows service to start a synchronization cycle?" All the functionality for scheduling was already there, so it seemed reasonable to investigate and explore this option. As it turns out, it is possible to send a signal to a Windows service, and the implementation turned out to be very simple (and simple is good, right?).

In addition to the scheduling at predefined moments defined in the job configuration file, which is implemented through the Quartz framework, we started an extra thread:

while (true)
{
    // Only consider an on-demand run when no time-triggered job is
    // executing and the service is not paused.
    if (scheduler.GetCurrentlyExecutingJobs().Count == 0
        && !paused)
    {
        // A StartSignal newer than the last end time means an
        // on-demand trigger was received.
        if (DateTime.Compare(StartSignal, LastEndTime) > 0)
        {
            running = true;
            StartSignal = DateTime.Now;
            LastEndTime = StartSignal;
            SchedulerConfig schedulerConfig =
                new SchedulerConfig(runConfigurationFile);
            if (schedulerConfig == null)
            {
                logger.Error("Scheduler configuration not found.");
                throw new JobExecutionException
                    ("Scheduler configuration not found.");
            }
            // ... run the synchronization cycle defined in the
            // run configuration here ...
            running = false;
        }
    }
    Thread.Sleep(5000); // 5 second delay
}
The first thing it does is check that no time-triggered schedule is running and that the service is not paused. Then it checks whether an on-demand trigger was received by inspecting the StartSignal timestamp. So as you can see, the StartSignal timestamp is the one controlling the action. If the service receives a signal to start a synchronization schedule, it simply sets the StartSignal parameter:

protected override void OnCustomCommand(int command)
{
    if (command == ONDEMAND)
    {
        StartSignal = DateTime.Now;
    }
}


If a signal was received, the first thing the service does is pause the time-triggered mechanism. When the synchronization cycle finishes, the time-triggered scheduling is resumed. The beautiful thing about this way of working is that the two separate mechanisms work alongside each other. The time-triggered schedule is not fired if an on-demand schedule is running and vice versa. If a signal was sent during a period when the service was paused, the on-demand schedule will fire as soon as the service is resumed. The StartSignal timestamp will take care of that.
So, how do you send a signal to this service, you ask? This is also fairly straightforward. I implemented the FIM portal scenario described above with a custom C# workflow containing a single code activity:

using System.ServiceProcess;

private const int OnDemand = 234;

private void startSync()
{
    ServiceController is4uScheduler =
        new ServiceController("IS4UFIMScheduler");
    // Send the custom command to the running Windows service.
    is4uScheduler.ExecuteCommand(OnDemand);
}

If you want to know more about developing custom activities, this article is a good starting point.
The integer value is arbitrary; you only need to make sure you send the same value as is defined in the service source code. The ServiceController takes the system name of the Windows service. The same is possible in PowerShell:

# Load the assembly and send the custom command to the service
Add-Type -AssemblyName System.ServiceProcess
$is4uScheduler = New-Object System.ServiceProcess.ServiceController
$is4uScheduler.ServiceName = "IS4UFIMScheduler"
$is4uScheduler.ExecuteCommand(234)

Another extension I implemented (inspired by Dave Nesbitt's question on my previous post) was the delay step. This kind of step allows you to insert a window of time between two management agent runs, in addition to the default delay that is inserted between every step. So now there are four kinds of steps possible in the run configuration file: LinearSequence, ParallelSequence, ManagementAgent and Delay. I saw the same idea being implemented in PowerShell here.

A very useful function I didn't mention in my previous post, but which was already there, is the cleanup of the run history (which can grow very large in a fast-synchronizing FIM deployment). This function can be enabled by setting the "ClearRunHistory" option to true and setting the number of days in the "KeepHistory" option. If you enable this option, you need to make sure the service account running the service is a member of the FIM Sync Admins security group. If you do not use this option, membership of the FIM Sync Operators group is sufficient.

To end, I would like to give you pointers to some other existing schedulers for FIM:
FIM 2010: How to Automate Sync Engine Run Profile Execution

Gluu: OAuth2 for IOT? [Technorati links]

November 20, 2014 03:07 AM


Today, consumers have no way to centrally manage access to all their Web stuff and IOT devices are threatening to create a whole new silo of security problems. This is one of the reasons I’ve been participating in the Open InterConnect Consortium Security Task Group.

People can’t individually manage every IOT device in their house. So it seems likely that some kind of centralized management tools will be necessary. Last week, I proposed the use of OAuth2 profiles OpenID Connect and UMA as the “two legs” of IOT security. Since then, a discussion has been active on the feasibility of OAuth2 for IOT?

One challenge for this design is that OAuth2 relies on HTTPS for transport security. While many devices will be powerful enough to handle an HTTPS connection, some devices are too small. Says Justin Richer @zer0n1ne from Mitre, “Basically replacing HTTP with CoAP and TLS with DTLS, you get a lot of functional equivalence.” In fact this effort is already in progress at the IETF, and research projects are in progress to build this out in simulation. For more info see OAuth 2.0 Bearer Token Usage over the Constrained Application Protocol (CoAP), OAuth 2.0 Internet of Things (IoT) Client Credentials Grant, and the Ace Working Group’s draft standard for Object Security for CoAP.

Assuming the transport layer security gets solved, another sticking point is the idea of central control. Here is the case against central control paraphrased by one of my comrades:

If you buy a light switch and a light bulb, they need to magically work together. When we state this as almost impossible, they will accept that the user needs a smartphone for the initial setup but not that he needs some extra dedicated authorization server. (Nor do I think that running this in the cloud will be acceptable either.)

Let’s consider the use case: how could an IOT light bulb connect to an IOT light switch?

Let’s say the light bulb publishes three APIs:

For central control, using the UMA profile of OAuth2, a client must present a valid RPT token from an Authorization Server to the light bulb. All the light bulb has to do is validate this token. This should be the default configuration for most IOT devices: they should quickly hook into the existing home security infrastructure with very little effort from IOT developers. There is no need for the light bulb to store or evaluate policies with this solution. I disagree that the cloud won’t be a likely place to manage your digital resources (what don’t we use Google for these days?). The home router might also be a handy place to host your home policy decision point.
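As a sketch of how little the bulb has to do under this model, assuming an introspection-style check against the authorization server – the `introspect()` function below is a stand-in for a network call to the AS, and the token values and resource names are made up for illustration:

```python
# Sketch of the bulb-side authorization check described above. A real
# bulb would call the authorization server over the network (or CoAP);
# here a dict stands in for the AS's answers.

KNOWN_TOKENS = {
    "rpt-123": {"active": True,
                "permissions": [{"resource_id": "bulb/power",
                                 "scopes": ["toggle"]}]},
    "rpt-999": {"active": False},
}

def introspect(rpt):
    """Stand-in for the authorization server's introspection endpoint."""
    return KNOWN_TOKENS.get(rpt, {"active": False})

def allow_request(rpt, resource, scope):
    """The bulb only checks the AS's answer; it stores no policies itself."""
    result = introspect(rpt)
    if not result.get("active"):
        return False
    return any(p["resource_id"] == resource and scope in p.get("scopes", [])
               for p in result.get("permissions", []))

print(allow_request("rpt-123", "bulb/power", "toggle"))  # True
print(allow_request("rpt-999", "bulb/power", "toggle"))  # False
```

The point of the sketch is the division of labor: all policy lives at the authorization server, and the resource server (the bulb) only validates what it is handed.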

But what if there is no central UMA authorization server? Is there a need for an alternate method of local authorization? Yes! The light bulb is the resource server, and it can always have some backup policies; for example, a USB connection (or button) could bypass UMA authorization.

For the light switch to make calls to these APIs, it would need client credentials. The light bulb itself could have a tiny OAuth2 chip that would provide the bare minimum server APIs for client discovery, client authentication, and dynamic client registration.

The light bulb can offer a few different ways for the light switch to “authenticate” depending on how fancy it is:
1) None (sometimes you’re on a trusted network)
2) API key / secret
3) JSON Web Key

In cases where the light bulb was not configured to use central authentication, it could check the access token against its cache of tokens issued to local clients.
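A minimal sketch of that local fallback, with an in-memory dict standing in for the bulb's token cache; the token names and lifetime are illustrative:

```python
import time

# token -> expiry timestamp; stands in for the bulb's cache of tokens
# it has issued to local clients.
local_token_cache = {}

def issue_local_token(token, lifetime_seconds):
    """Record a token handed out to a local client (e.g. the switch)."""
    local_token_cache[token] = time.time() + lifetime_seconds

def is_valid_local_token(token):
    """Accept only cached, unexpired tokens."""
    expiry = local_token_cache.get(token)
    return expiry is not None and time.time() < expiry

issue_local_token("switch-token", 3600)
print(is_valid_local_token("switch-token"))   # True
print(is_valid_local_token("unknown-token"))  # False
```

The same check function could sit behind both modes: consult the central server when one is configured, fall back to the local cache when it is not.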

OpenID Connect offers lots of features for client registration. For example, you could correlate client registrations with “request_uris.” (Think entityID if you are familiar with SAML). See the registration request section of the OpenID Connect Dynamic Client Registration Spec

Why write a new OAuth2 based client authentication protocol when we already have OpenID Connect? Connect has been shown to be usable by developers, was designed to make simple things simple, and scales to complex requirements. Wouldn’t it make sense to just create a mapping for a new transport layer? Won’t there be even more transport layers in the future? What about secure-Bluetooth, secure-NFC, or secure-ESP? Will we have to re-invent client registration every time there is a new secure transport layer?

If the Open Interconnect Consortium Core Framework TG decides to mandate support for CoAP, then it may not be possible to use OpenID Connect, UMA or any other existing security protocol developed for HTTP.

Says Eve Maler, VP Innovation & Emerging Technology at ForgeRock, “My suspicion has been that a CoAP binding of UMA would be an interesting and worthwhile project… could be done through the UMA extensibility profiles now–basically replacing the HTTP parts of UMA with CoAP parts”

Nat Sakimura, Chairman of the OpenID Foundation, commented “binding to other transport protocols, definitely yes. That was our intention from the beginning. That’s why we abstracted it. Defining a binding to CoAP etc. would be a good starting point. In the ACE Working Group at the IETF, Hannes Tschofenig from ARM has already started the work.”

Mike Jones - Microsoft: JOSE -37 and JWT -31 drafts addressing remaining IESG review comments [Technorati links]

November 20, 2014 01:19 AM

These JOSE and JWT drafts contain updates intended to address the remaining outstanding IESG review comments by Pete Resnick, Stephen Farrell, and Richard Barnes, other than one that Pete may still provide text for. Algorithm names are now restricted to using only ASCII characters, the TLS requirements language has been refined, the language about integrity-protecting header parameters used in trust decisions has been augmented, we now say what to do when an RSA private key with “oth” is encountered but not supported, and we now talk about JWSs with invalid signatures being considered invalid, rather than them being rejected. Also, the CRT parameter values were added to the example JWK RSA private key representations.

The specifications are available at:

HTML formatted versions are available at:

November 19, 2014

Matt Pollicove - CTI: Some thoughts on database locking in Oracle and Microsoft SQL Server [Technorati links]

November 19, 2014 06:29 PM

Deadlocks are the bane of those of us responsible for designing and maintaining any type of database system. I’ve written about these before at the dispatcher level. However, this time around I’d like to discuss them a little further “down,” so to speak, at the database level. Also, in talking to various people about this topic, I’ve found that it’s potentially the most divisive question since “Tastes good vs. Less filling.”

Database deadlocks are much like application ones; they typically come when two processes are trying to access the same database row at the same time. Most often this is when the system is trying to read and write the row at the same time. A nice explanation can be found here. What we essentially wind up with is the database equivalent of a traffic jam where no one can move. It’s interesting to note that Oracle and Microsoft SQL Server handle these locking scenarios differently. I’m not going to go into DB2 at the moment but will address it if there is sufficient demand.

When dealing with SQL Server, management of locks is handled through the use of the “hint” called NOLOCK. According to MSDN:

Hints are options or strategies specified for enforcement by the SQL Server query processor on SELECT, INSERT, UPDATE, or DELETE statements. The hints override any execution plan the query optimizer might select for a query. (Source)
When NOLOCK is used, this is the same as using READUNCOMMITTED, which some of you might be familiar with if you did the NetWeaver portion of the IDM install when setting up the data source. Using this option keeps the SQL Server database engine from issuing locks. The big issue here is that one runs the risk of reading dirty (old) data during database operations. Be careful when using NOLOCK for this reason. Even though the SAP Provisioning Framework makes extensive use of the NOLOCK functionality, they regression test the heck out of the configuration. Make sure you do, too. Misuse of NOLOCK can lead to bad things happening in the Identity Store database.
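To make the hint concrete, here is roughly what it looks like in a query; the table and column names are made up for illustration, not taken from the actual Identity Store schema:

```sql
-- Read without taking shared locks (same effect as READUNCOMMITTED).
-- Rows returned may be "dirty", i.e. changed but not yet committed.
SELECT attr_name, attr_value
FROM some_idstore_table WITH (NOLOCK)
WHERE mskey = 12345;
```

The hint applies per table reference, so in a join you would repeat WITH (NOLOCK) on each table you are willing to read uncommitted data from.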

There is also a piece of SQL Server functionality referred to as Snapshot Isolation, which appears to work like NOLOCK writ large: database snapshots are held in TEMPDB for processing (source). This functionality was recommended by a DBA I worked with on a project some time ago. The functionality was tested in DEV and then rolled to the customer’s PRODUCTION instance.

Oracle is a little different in the way it approaches locking, in that the system has more internal management of conflicts through the use of rollback logs, forcing data to be committed before writes can occur; thus deadlocks occur much less often (Source). This means that there is no similar NOLOCK functionality in the Oracle Database System.

One final thing to consider with database deadlocks is how the database is being accessed, regardless of the database being used. It is considered a best practice in SAP IDM to use To Identity Store passes as opposed to uIS_SetValue whenever possible (Source).

At the end of the day, I don’t know that I can really tell you to employ these mechanisms or not. In general we do know that it’s better not to have deadlocks than to have them, and to do what you can to achieve this goal. If you are going to use these techniques, do make sure you are doing so in concert with your DBA team and after careful testing. I have seen Microsoft SQL Server’s Snapshot Isolation work well in a busy production environment, but I will not recommend its universal adoption, as I can’t tell you how well it will work in your environment. I will, however, recommend that you look into it with your DBA team if you are experiencing deadlocks in SQL Server.

Kuppinger Cole: Database Security On and Off the Cloud [Technorati links]

November 19, 2014 11:05 AM
In KuppingerCole Podcasts

Continued proliferation of cloud technologies offering on-demand scalability, flexibility and substantial cost savings means that more and more organizations are considering moving their applications and databases to IaaS or PaaS environments. However, migrating sensitive corporate data to a 3rd party infrastructure brings with it a number of new security and compliance challenges that enterprise IT has to address. Developing a comprehensive security strategy and avoiding point solutions for ...

Watch online

Vittorio Bertocci - MicrosoftFrom Domain to TenantID [Technorati links]

November 19, 2014 06:03 AM

Ha, I discovered that I kind of like to write short posts Smile so here’s another one.

Azure AD endpoints can be constructed with the domain and the tenantID interchangeably; “” and “” are functionally equivalent. However, the tenantID has some clear advantages: it is immutable, globally unique and non-reassignable, while domains do change hands on occasion. Moreover, you can have many domains associated with a tenant, but only one tenantID. Really, the only thing the domain has going for it is that it is human readable and there’s a reasonable chance a user can remember and type it.

Per the above, there are times when it is useful to find out the tenantID for a given domain. The trick is reeeeeally simple. You can use the domain to construct one of the AAD endpoints which return tenant metadata, for example the OpenId Connect one; such metadata will contain the tenantID. In practice: say that you know the target domain. How do I find out the corresponding tenantID, without even being authenticated?

Easy. I do a GET of

The result is a JSON file that has the tenantID all over it:

{
   "authorization_endpoint" : "",
   "check_session_iframe" : "",
   "end_session_endpoint" : "",
   "id_token_signing_alg_values_supported" : [ "RS256" ],
   "issuer" : "",
   "jwks_uri" : "",
   "microsoft_multi_refresh_token" : true,
   "response_modes_supported" : [ "query", "fragment", "form_post" ],
   "response_types_supported" : [ "code", "id_token", "code id_token", "token" ],
   "scopes_supported" : [ "openid" ],
   "subject_types_supported" : [ "pairwise" ],
   "token_endpoint" : "",
   "token_endpoint_auth_methods_supported" : [ "client_secret_post", "private_key_jwt" ],
   "userinfo_endpoint" : ""
}

Whip out your favorite JSON parsing class, and you’re done. Ta—dahh ♫
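As a rough Python sketch of that last step: given the metadata document, pull the tenantID (a GUID) out of the issuer value. The metadata below is an abridged, hypothetical response (the GUID is made up), and the discovery URL in the comment is the standard OpenID Connect configuration path; the actual fetch is omitted:

```python
import json
import re

# Abridged, hypothetical metadata as returned by a GET of
# https://login.windows.net/<domain>/.well-known/openid-configuration
# (the GUID below is made up for the example).
metadata = json.loads("""
{
  "issuer": "https://sts.windows.net/6babcaad-604b-40ac-a9d7-9fd97c0b779f/",
  "response_modes_supported": ["query", "fragment", "form_post"]
}
""")

# A tenantID is a GUID, so fish the first GUID out of the issuer URL.
GUID_PATTERN = r"[0-9a-fA-F]{8}-(?:[0-9a-fA-F]{4}-){3}[0-9a-fA-F]{12}"

def tenant_id(md):
    """Return the tenantID embedded in the metadata's issuer value."""
    match = re.search(GUID_PATTERN, md["issuer"])
    return match.group(0) if match else None

print(tenant_id(metadata))  # prints 6babcaad-604b-40ac-a9d7-9fd97c0b779f
```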

Kantara InitiativeEuropean Workshop on Trust & Identity [Technorati links]

November 19, 2014 03:02 AM

For those who are in the EU or will be near Vienna, Austria: you may wish to attend the European Workshop on Trust and Identity to discuss “Connecting Identity Management Initiatives.” This is an openspace workshop where attendees will have the opportunity to network and share with others.


Openspace workshops have been finding and solving trust and identity issues for years. Starting in 2013 EWTI made this format available in Europe and received excellent feedback from participants. If you are looking for a substantial discussion on this subject it is likely that you will meet the right people here!

Meet at the EU Identity Workshop in Vienna 2014

EWTI is the opportunity to discuss, share knowledge, and learn about everything related to Internet Trust and Identity today. Topics at the EWTI in 2013 included:
  • Gov/Academic/Social ID
  • How to use SAML with REST and SOAP
  • eID in your country: where is it today, where is it heading?
  • SLO Single Logout for SAML & OAuth
  • STORK – existing federations user cases, interoperability
  • Binding LoA attributes to social ids (non-technical – strategy)
  • NSTIC: impressions, feedback, relation to other world-wide projects
  • Banks and Telcos as strong Identity Providers in Finland (Business model)
  • Trust and Market for Personal Data: Privacy – How to re-establish trust?
  • Trust Frameworks beyond Sectors: Release of attributes, LOA
  • Authorization in SAML federations
  • Scaleable & comprehensive attributes design (authN & authZ)
  • E-Mail as global identifier: embrace/defend/fight it?
  • eID and Government stuff
  • Metadata exchange session: Federations at scale
  • SCIM 101
  • Rich-clients for mobile devices
  • Step up AuthN as a Service
  • Is SPML dead – who uses SCIM?
  • SAML2 test tool
  • All identities are self asserted
  • de-/provisioning / federated notification
  • Biobank Cloud Security
November 17, 2014

Vittorio Bertocci - MicrosoftSkipping the Home Realm Discovery Page in Azure AD [Technorati links]

November 17, 2014 04:43 PM

A typical authentication transaction with Azure AD will open with a generic credential gathering page. As the user enters his/her username, Azure AD figures out from the domain portion of the username whether the actual credential gathering should take place elsewhere (for example, if the domain is associated with a federated tenant, the actual cred gathering will happen on the associated ADFS pages) and, if that’s the case, it will redirect accordingly.

Sometimes your app logic is such that you know in advance whether such transfer should happen. In those situations you have the opportunity to let our libraries (ADAL or the OWIN middlewares for OpenId Connect/WS-Federation) know where to go right from the start.

In OAuth2 and OpenId Connect you do so by passing the target domain in the “domain_hint” parameter.
In ADAL you can pass it via the following:

// reconstructed call shape; authContext, resource and clientId come from your app setup
AuthenticationResult ar =
                    authContext.AcquireToken(resource, clientId,
                    new Uri("http://any"), PromptBehavior.Always, 
                    UserIdentifier.AnyUser, "");


In the OWIN middleware for OpenId Connect you can do the same in the RedirectToIdentityProvider notification:

    new OpenIdConnectAuthenticationOptions
    {
        ClientId = clientId,
        Authority = authority,
        PostLogoutRedirectUri = postLogoutRedirectUri,
        Notifications = new OpenIdConnectAuthenticationNotifications()
        {
            RedirectToIdentityProvider = (context) => 
            {
                context.ProtocolMessage.DomainHint = ""; 
                return Task.FromResult(0); 
            }
        }
    }

Finally, in WS-Fed you do the following:

   new WsFederationAuthenticationOptions
   {
      Notifications = new WsFederationAuthenticationNotifications
      {
         RedirectToIdentityProvider = (context) =>
         {
            context.ProtocolMessage.Whr = "";
            return Task.FromResult(0);
         }
      }
   }

Party on! Smile

Kuppinger ColeAdvisory Note: Security and the Internet of Everything and Everyone - 71152 [Technorati links]

November 17, 2014 03:06 PM
In KuppingerCole

The vision for the Internet of Everything and Everyone is for more than just an Internet of Things; it makes bold promises for the individual as well as for businesses. However the realization of this vision is based on existing systems and infrastructure which contain known weaknesses.

November 16, 2014

Anil JohnRFI - EMV Enabled Debit Cards as Authentication Tokens? [Technorati links]

November 16, 2014 08:55 PM

The U.S. is finally moving to EMV compliant payment cards. Can these cards be used as multi-factor authentication tokens for electronic transactions outside the payment realm? What are the security and privacy implications? Who needs to buy into and be in the transaction loop to even consider this as a possibility?

Click here to continue reading. Or, better yet, subscribe via email and get my full posts and other exclusive content delivered to your inbox. It’s fast, free, and more convenient.

The opinions expressed here are my own and do not represent my employer’s view in any way.

November 14, 2014

CourionFinancial Services Ready to Embrace Identity and Access intelligence [Technorati links]

November 14, 2014 02:32 PM

Access Risk Management Blog | Courion

Nick Berents
This week at London’s Hotel Russell, the Identity Management 2014 conference brought together hundreds of technology professionals and security specialists across government and enterprises of all sizes and industries.

It was fascinating to hear from industry leaders discussing the next generation of Identity and Access Management, representing diverse firms and organizations such as ISACA, Visa Europe, Ping Identity, CyberArk, and beverage giant SABMiller.

A highlight for me was a session that included Nick Taylor, Director of IAM at Deloitte, and Andrew Bennett, CTO of global private bank Kleinwort Benson.

Taylor discussed the challenges that IAM professionals face in making access governance reviews business friendly, as often there is not enough context to understand the risks that they face. For example, an equities trader making lots of trades at a certain time of the day may be normal, but maybe not so normal if that trader is doing it from different locations or geographies.

Bennett supported that notion by pointing out that technical jargon can mask risk that exists, so he recommended that the financial services industry look into the concept of identity and access intelligence and start taking it on now. Adopting such a solution is not a case of throwing more tools at the problem; it is a matter of having the right tool to make sense of the mess.

It was also good to hear our partner Ping Identity’s session “It’s Not About the Device – It’s All About the Standards” on how modern identity protocols allow the differentiation of business and personal identities.

Overall a good conference that provided attendees with lots of opportunity to learn best practices and hear how their colleagues are approaching identity management. But rather than waiting for next year’s conference, anyone can learn more in the near term by attending Courion’s upcoming webinar Data Breach - Top Tips to Protect, Detect and Deter on Thursday November 20th at 11 a.m. ET, 8 a.m. PT, 4 p.m. GMT.

Ludovic Poitou - ForgeRockThe new ForgeRock Community site [Technorati links]

November 14, 2014 10:55 AM

Earlier this week, a new major version of ForgeRock Community site was pushed to production.

Besides a cleaner look and feel and a long-awaited reorganisation of content, the new version enables better collaboration around the open source projects and initiatives. You will find forums, for general discussions or project-specific ones, and new groups around specific topics like UMA or IoT. We’ve also added a calendar with different views, so that you can find or suggest events, conferences, and webinars touching the projects and IRM at large.
Great work, Aron and Marius, on the new site! Thank you.

And we’ve also announced a new project, OpenUMA. If you haven’t paid attention to it yet, I suggest you do now. User-Managed Access (UMA) is an OAuth-based protocol that enables an individual to control the authorization of data sharing and service access made by others. The OpenUMA community shares an interest in informing, improving, and extending the development of UMA-compatible open-source software as part of ForgeRock’s Open Identity Stack.


Filed under: General Tagged: collaboration, community, ForgeRock, identity, opensource, projects
November 13, 2014

Julian BondThe lights are going out in Syria. Literally. [Technorati links]

November 13, 2014 06:48 PM
The lights are going out in Syria. Literally.
 The Olduvai cliff: are the lights going out already? »
Image from Li and Li, "international journal of remote sensing." h/t Colonel Cassad". The image shows the nighttime light pattern in Syria three years ago (a) and today (b). Those among us who are diehard catastrophists surel...

[from: Google+ Posts]
November 12, 2014

Kuppinger Cole16.12.2014: Secure Mobile Information Sharing: addressing enterprise mobility challenges in an open, connected business [Technorati links]

November 12, 2014 02:44 PM
In KuppingerCole

Fuelled by the exponentially growing number of mobile devices, as well as by increasing adoption of cloud services, demand for various technologies that enable sharing information securely within organizations, as well as across their boundaries, has significantly surged. This demand is no longer driven by IT; on the contrary, organizations are actively looking for solutions for their business needs.
November 11, 2014

Nat SakimuraXACML v3.0 Privacy Policy Profile Version 1.0 Public Review [Technorati links]

November 11, 2014 09:05 PM

A 15-day public review period for a Committee Specification Draft (CSD) of the eXtensible Access Control Markup Language (XACML) begins on November 12.


The review period runs from 00:00 UTC on November 12 to 23:59 UTC on November 26.


Editable source (Authoritative):


HTML with inline tags for direct commenting:




All comments submitted are considered to have been provided under the OASIS Feedback License. For details, see [3] and [4] below.

========== Additional references:

[1] OASIS eXtensible Access Control Markup Language (XACML) TC

[2] Previous public reviews:

* 15-day public review, 23 May 2014:

* 60-day public review, 21 May 2009:


RF on Limited Terms Mode

Kaliya Hamlin - Identity WomanQuotes from Amelia on Systems relevant to Identity. [Technorati links]

November 11, 2014 08:14 PM

This is coverage of a WSJ interview with Amelia Andersdotter, the former European Parliament member from the Pirate Party in Sweden. Some quotes stuck out for me as being relevant.

If we also believe that freedom and individualism, empowerment and democratic rights, are valuable, then we should not be constructing and exploiting systems of control where individual disempowerment are prerequisites for the system to be legal.

We can say that most of the legislation around Internet users protect systems from individuals. I believe that individuals should be protected from the system. Individual empowerment means the individual is able to deal with a system, use a system, work with a system, innovate on a system—for whatever purpose, social or economic. Right now we have a lot of legislation that hinders such [empowerment]. And that doesn’t necessarily mean that you have anarchy in the sense that you have no laws or that anyone can do whatever they want at anytime. It’s more a question of ensuring that the capabilities you are deterring are actually the capabilities that are most useful to deter. [emphasis mine].

This statement is key: “individuals should be protected from the system.” How do we create accountability from systems to people, and not just the other way around? I continue to raise this issue about so-called trust frameworks that are proposed as the solution to interoperable digital identity – there are many concerning aspects to the solutions, including what seems to be very low levels of accountability of systems to people.

The quotes from Amelia continued…

I think the Internet and Internet policy are very good tools for bringing power closer to people, decentralizing and ensuring that we have distributive power and distributive solutions. This needs to be built into the technical, as well as the political framework. It is a real challenge for the European Union to win back the confidence of European voters because I think a lot of people are increasingly concerned that they don’t have power or influence over tools and situations that arise in their day-to-day lives.

The European Union needs to be more user-centric. It must provide more control [directly] to users. If the European Union decides that intermediaries could not develop technologies specifically to disempower end users, we could have a major shift in global political and technical culture, not only in Europe but worldwide, that would benefit everyone.

Mike Jones - MicrosoftJWK Thumbprint spec adopted by JOSE working group [Technorati links]

November 11, 2014 08:01 PM

IETF logoThe JSON Web Key (JWK) Thumbprint specification was adopted by the JOSE working group during IETF 91. The initial working group version is identical to the individual submission version incorporating feedback from IETF 90, other than the dates and document identifier.

JWK Thumbprints are used by the recently approved OpenID Connect Core 1.0 incorporating errata set 1 spec. JOSE working group co-chair Jim Schaad said during the working group meeting that he would move the document along fast.

The specification is available at:

An HTML formatted version is also available at:

Kuppinger ColeHow to Protect Your Data in the Cloud [Technorati links]

November 11, 2014 06:07 PM
In KuppingerCole Podcasts

More and more organizations and individuals are using the Cloud and, as a consequence, the information security challenges are growing. Information sprawl and the lack of knowledge about where data is stored are in stark contrast to the internal and external requirements for its protection. To meet these requirements it is necessary to protect data not only but especially in the Cloud. With employees using services such as iCloud or Dropbox, the risk of information being out of control and l...

Watch online

Kuppinger ColeA Haven of Trust in the Cloud? [Technorati links]

November 11, 2014 08:59 AM
In Mike Small

In September a survey was published in Dynamic CISO that showed that “72% of Businesses Don’t Trust Cloud Vendors to Obey Data Protection Laws and Regulations”. Given this lack of trust by their customers, what can cloud service vendors do?

When an organization stores data on its own computers, it believes that it can control who can access that data. This belief may be misplaced given the number of reports of data breaches from on premise systems; but most organizations trust themselves more than they trust others.  When the organization stores data in the cloud, it has to trust the cloud provider, the cloud provider’s operations staff and the legal authorities with jurisdiction over the cloud provider’s computers. This creates many serious concerns about moving applications and data to the cloud and this is especially true in Europe and in particular in geographies like Germany where there are very strong data protections laws.

One approach is to build your own cloud, where you have physical control over the technology but can exploit some of the flexibility that a cloud service provides. This is the approach being promoted by Microsoft.  In October Microsoft, in conjunction with Dell, announced their “Cloud Platform System”.  This is effectively a way for an organization to deploy Dell servers running the Microsoft Azure software stack on premise.  Using this platform, an organization can build and deploy on-premise applications that are Azure cloud ready.  At the same time it can see for itself what goes on “under the hood”.  Then, when the organization has built enough trust, or when it needs more capacity, it can easily extend the existing workload into the cloud.   This approach is not unique to Microsoft – other cloud vendors also offer products that can be deployed on premise where there are specific needs.

In the longer term Microsoft researchers are working to create what is being described as a “Haven in the Cloud”.  This was described in a paper at the 11th USENIX Symposium on Operating Systems Design and Implementation.  In this paper, Baumann and his colleagues offer a concept they call “shielded execution,” which protects the confidentiality and the integrity of a program, as well as the associated data from the platform on which it runs—the cloud operator’s operating system, administrative software, and firmware. They claim to have shown for the first time that it is possible to store data and perform computation in the cloud with equivalent trust to local computing.

The Haven prototype uses the hardware protection proposed in Intel’s Software Guard Extensions (SGX)—a set of CPU instructions that can be used by applications to isolate code and data securely, enabling protected memory and execution. It addresses the challenges of executing unmodified legacy binaries and protecting them from a malicious host.  It is based on “Drawbridge” another piece of Microsoft research that is a new kind of virtual-machine container.

The question of trust in cloud services remains an important inhibitor to their adoption. It is good to see that vendors are taking these concerns seriously and working to provide solutions.  Technology is an important component of the solution, but it is not, in itself, sufficient.  In general, computers do not breach data by themselves; human interactions play an important part.  The need for cloud services to support better information stewardship, and for cloud service providers to create an information stewardship culture, is also critical to creating trust in their services.  From the perspective of the cloud service customer, my advice is always: trust but verify.

November 10, 2014

Ian GlazerThe Only Two Skills That Matter: Clarity of Communications and Empathy [Technorati links]

November 10, 2014 04:49 PM


I meant to write a post describing how I build presentations, but I realized that I can’t do that without writing this one first. I had the honor of working with Drue Reeves when I was at Burton and Gartner. Drue was my chief of research, and as an agenda manager I worked closely with him in shaping what and how our teams would research. More importantly, we got to define the kind of analysts we hired. We talked about all the kinds of skills an analyst should have. We’d list out all sorts of technical certifications, evidence of experience, and the like. But in the end, that list always reduced down to two things. If you have them, you can be successful in all your endeavors. The two most important skills someone needs to be successful in what they do are:

Radical clarity

To make oneself understood and understandable regardless of the situation. Clarity that transcends generations, languages, sets of belief, and knowledge. That is what is required. And that is a far cry from the typical “strong communication skills” b.s. you see on a lot of resumes.

The trick to communicating clearly is realizing that it’s not about the prettiness or exactness of what you say. It’s all in understanding what will be absorbed by and resonate with the other: the person across from you, the audience, the reader, etc. Strip all of the superfluous bits and layers away and get down to that genuine message that you want the other to keep with them.

To do that requires empathy.

Genuinely giving a shit

There is no way to communicate with an audience (or even just another person) unless you actually care about them. You have to care about their wellbeing. You have to be invested in their success. Even when they don’t want to hear your heretical opinion. Even when they have competing ideas. Especially then.

If you start phoning it in, if you just give a stock answer or deliver the same old deck in the same old format, the audience knows, and they know that you’ve checked out and are no longer interested in their success. Even if you hold a universal truth and wondrous innovation, the audience will not care, because you don’t either.

Clarity and empathy. These aren’t skills you take classes in. Sure, you can refine techniques through training. But you actually get better at these things by simply trying to do them. Just like giving presentations. I’ll tackle that one next…


Ludovic Poitou - ForgeRockHighlights of IRMSummit Europe 2014… [Technorati links]

November 10, 2014 03:10 PM

Last week at the nice Powerscourt Estate, outside Dublin, Ireland, ForgeRock hosted the European Identity Relationship Management Summit, attended by over 200 partners, customers, prospects, and users of ForgeRock technologies. What a great European IRMSummit it was!

If you haven’t been able to attend, here’s some highlights:

I heard many talks and discussions about identity being the cornerstone of the digital transformation of enterprises and organizations, shifting identity projects from cost centers to revenue generators.

There was lots of focus on consumer identity and access management, with some perspectives on current identity standards and what is going to be needed from the IRM solutions. We’ve also heard from security and analytics vendors, demonstrating how ForgeRock’s Open Identity Stack can be combined with the network security layer or with analytics tools to increase security and context awareness when controlling access.

User-Managed Access is getting more and more real, as the specifications are close to being finalised, and ForgeRock announced the OpenUMA initiative to foster ideas and code around it. See

Chris and Allan around an Internet connected coffee machine, powered by ARM
There were many talks about the Internet of Things, and especially demonstrations around defining the relationship between a Thing and a User and securing access to the data produced by the Thing. We’ve seen a door lock being unlocked with an NFC-enabled mobile phone by provisioning the appropriate credentials over the air, and a smart coffee machine able to identify the coffee type and the user, pushing the data to a web service and asking the user for consent to share. There’s a common understanding that all the things will have identities and relations with other identities.

There were several interesting discussions and presentations about Digital Citizens, illustrated by reports from deployments in Norway, Switzerland, Nigeria, and the European Commission cross-border authentication initiatives STORK and eIDAS

Half a day was dedicated to ForgeRock products, with introductory trainings, demonstrations of coming features in OpenAM, OpenDJ, OpenIDM and OpenIG. During the Wednesday afternoon, I did 2 presentations on OpenIG, demonstrating the ease of integration of OAuth2.0 and OpenID Connect to protect applications and APIs, and on OpenDJ, demonstrating the flexibility and power of the REST to LDAP interface.

All presentations and materials are available online as pdf (I will update this article when the videos will also be available). Meanwhile, you can find here a short summary of the Summit in a video produced by Markus.

Powerscourt Estate House
Powerscourt Estate gardens
The summit wouldn’t be such a great conference if there was no plan for social interactions and fun. This year we had a nice dinner in the Powerscourt house (aka the Castle) followed by live music in the pub. The band was great, but became even better when Joni and Eve joined them for a few songs, for the great pleasure of all the guests.


The band

Of course, I have to admit that the best part of the IRM Summit in Ireland was the pints of Guinness !

To all attendees, thank you for your participation, the interesting discussions, and the input to our products. I’m looking forward to seeing you again next year for the 2015 edition. Sláinte !

As usual, you can find the photos that I’ve taken at the Powerscourt Estate on Flickr. Feel free to copy for non commercial use, and if you do republish them, I would appreciate getting the credit for them.

[Updated on Nov 11] Added link to the highlight video produced by Markus
[Updated on Nov 13] Added link to the slideshare folder where all presentations have been published

Filed under: Identity Tagged: conference, ForgeRock, identity, IRM, IRMSummit2014, IRMSummitEurope, openam, opendj, openidm, openig, summit

KatasoftBootstrapping an Express.js App with Yeoman [Technorati links]

November 10, 2014 03:00 PM

So, you want to build an Express.js web application, eh? Well, you’re in the right place!

In this short article I’ll hold your hand, sing you a song (not literally), and walk you through creating a bare-bones Express.js web application and deploying it on Heroku with Stormpath and Yeoman.

In the next few minutes you’ll have a live website, ready to go, with user registration, login, and a simple layout.

Step 1: Get Ready!

Before we dive into the code and stuff, you’ve got to get a few things setup on your computer!

First off, you need to go and create an account with Heroku if you haven’t already. Heroku is an application hosting platform that’s really awesome! So awesome that I even wrote a book about them (true story)! But what makes them really great for our example here today is that they’re free, and easy to use.

Once you’ve got Heroku installed you then need to install their toolbelt app on your computer. This is what lets you build Heroku apps.

Next off, you need to have Node installed and working on your computer. If you don’t already have it installed, go visit the Node website and get it setup.

Lastly, you need to install a few Node packages. You can install them all by running the commands below in your terminal:

$ sudo npm install -g yo generator-stormpath

The yo package is yeoman — this is a tool we’ll be using to create an application for us.

The generator-stormpath package is the actual project that yo will install — it is what holds the actual project code and information we need to get started.

Got through all that? Whew! Good work!

Step 2: Bootstrap a Project

OK! Now that the boring stuff is over, let’s create a project!

The first thing we need to do is create a directory to hold our new project. You can do this by running the following command in your terminal:

$ mkdir myproject
$ cd myproject

You should now be inside your new project directory.

At this point, you can now safely bootstrap your new project by running:

$ yo stormpath

This will kick off a script that creates your project files, and asks if you’d like to deploy your new app to Heroku. When prompted, enter ‘y’ for yes. If you don’t do this, your app won’t be live :(

NOTE: If you don’t get asked a question about Heroku, then you didn’t follow my instructions and install Heroku like I said to earlier! Go back to Step #1!

Assuming everything worked, you should see something like this:

Tip: For a full resolution link so you can actually see what I’m typing, view this image directly.

Now, if you take a look at your directory, you’ll notice there are a few new files in there for you to play around with:

We’ll get into the code in the next section, but for now, go ahead and run:

$ heroku open

In your terminal. This will open your browser automatically, and open up your brand new LIVE web app! Cool, right?

As I’m sure you’ve noticed by now, your app is running live, and lets you sign up, log in, log out, etc. Pretty good for a few seconds of work!

And of course, here are some obligatory screenshots:

Screenshot: Yo Stormpath Index Page

Screenshot: Yo Stormpath Registration Page

Screenshot: Yo Stormpath Logged in Page

Screenshot: Yo Stormpath Login Page


So, as of this very moment in time, we’ve:

So now that we’ve got those things out of the way, we’re free to build a real web app! This is where the real fun begins!

For thoroughness, let’s go ahead and implement a simple dashboard page on our shiny new web app.

Go ahead and open up the routes/index.js file, and add a new function call:

router.get('/dashboard', stormpath.loginRequired, function(req, res) {
  res.send('Hi, ' + req.user.givenName + '. Welcome to your dashboard!');
});
Be sure to place this code above the last line in the file that says module.exports = router.

This will render a nice little dashboard page for us. See the stormpath.loginRequired middleware we’re using there? That’s going to force the user to log in before allowing them to access that page — cool, huh?

You’ve also probably noticed that we’re saying req.user.givenName in our route code — that’s because Stormpath’s library automatically creates a user object called req.user once a user has been logged in — so you can easily retrieve any user params you want!

NOTE: More information on working with user objects can be found in our official docs.

Anyway — now that we’ve got that little route written, let’s also tweak our Stormpath setup so that once a user logs in, they’ll be automatically redirected to the new dashboard page we just wrote.

To do this, open up your index.js file in the root of your project and add the following line to the stormpath.init middleware:

app.use(stormpath.init(app, {
  apiKeyId:     process.env.STORMPATH_API_KEY_ID,
  apiKeySecret: process.env.STORMPATH_API_KEY_SECRET,
  application:  process.env.STORMPATH_URL || process.env.STORMPATH_APPLICATION,
  secretKey:    process.env.STORMPATH_SECRET_KEY,
  redirectUrl: '/dashboard',
}));

The redirectUrl setting (explained in more detail here) tells Stormpath that once a user has logged in, they should be redirected to the given URL — PERFECT!

Now, let’s see if everything is working as expected!

$ git add --all
$ git commit -m "Adding a new dashboard page!"
$ git push heroku master

The last line there, git push heroku master, will deploy your updates to Heroku. Once that’s finished, just run:

$ heroku open

To open your web browser to your app page again — now take a look around! If you log into your account, you’ll see that you’ll end up on the new dashboard page! It should look something like this:

Screenshot: Yo Stormpath Dashboard Page yo-stormpath-dashboard

BONUS: What happens if you log out of your account, then try visiting the dashboard page directly? Does it let you in? HINT: NOPE!


So if you’ve gotten this far — congrats! You are awesome, amazing, and super cool. You’re probably wondering “What next?” And, that’s a great question!

If you’re hungry for more, you’ll want to check out the following links:

They’re all awesome tools, and I hope you enjoy them.

Lastly — if you’ve got any feedback, questions, or concerns — leave me a comment below. I’ll do my best to respond in a timely fashion.

Now GO FORTH and build some stuff!

November 09, 2014

OpenID.netErrata to OpenID Connect Specifications Approved [Technorati links]

November 09, 2014 07:28 PM

Errata to the following specifications have been approved by a vote of the OpenID Foundation members:

An Errata version of a specification incorporates corrections identified after the Final Specification was published.

The voting results were:

Total votes: 46 (out of 194 members = 24% > 20% quorum requirement)

The original final specification versions remain available at these locations:

The specifications incorporating the errata are available at the standard locations and at these locations:

— Michael B. Jones – OpenID Foundation Board Secretary

OpenID.netImplementer’s Draft of OpenID 2.0 to OpenID Connect Migration Specification Approved [Technorati links]

November 09, 2014 07:26 PM

The following specification has been approved as an OpenID Implementer’s Draft by a vote of the OpenID Foundation members:

An Implementer’s Draft is a stable version of a specification providing intellectual property protections to implementers of the specification.

This Implementer’s Draft is available at these locations:

The voting results were:

Total votes: 46 (out of 194 members = 24% > 20% quorum requirement)

— Michael B. Jones – OpenID Foundation Board Secretary

November 08, 2014

Anil JohnWhy Multi-Factor and Two-Factor Authentication May Not Be the Same [Technorati links]

November 08, 2014 06:20 PM

Two Factor Authentication is currently the bright and shining star that everyone, from those who offer ‘free’ services to those who offer high value services, wants to know and emulate. When designing such implementations, it is important to understand the implications to identity assurance if the two-factor implementation does not correctly incorporate the principles of multi-factor authentication.

Click here to continue reading. Or, better yet, subscribe via email and get my full posts and other exclusive content delivered to your inbox. It’s fast, free, and more convenient.

The opinions expressed here are my own and do not represent my employer’s view in any way.

November 07, 2014

Julian BondSaccades and LED lights. [Technorati links]

November 07, 2014 06:37 PM

Paul MadsenApplication unbundling & Native SSO [Technorati links]

November 07, 2014 04:33 PM
You used to have a single application on your phone from a single social provider; you likely now have multiple.

Where there was Google Drive, there is now Sheets, Docs, and Slides - each individual application optimized for a particular document format.

Where the chat function used to be a tab within the larger Facebook application, there is now Facebook Messenger - a dedicated chat app.

LinkedIn has 4 individual applications.

The dynamic is not unique to social applications.

 According to this article
Mobile app unbundling occurs when a feature or concept that was previously a small piece of a larger app is spun off on it’s own with the intention of creating a better product experience for both the original app and the new stand-alone app.
The unbundling trend seems mostly driven by the constraints of mobile devices - multiple functions hidden behind tabs may work on a desktop browser, but on a small screen, they may be hidden and only accessible through scrolling or clicking.

That was the stated justification for Facebook's unbundling of Messenger:
We wanted to do this because we believe that this is a better experience. Messaging is becoming increasingly important. On mobile, each app can only focus on doing one thing well, we think. The primary purpose of the Facebook app is News Feed. Messaging was this behavior people were doing more and more. 10 billion messages are sent per day, but in order to get to it you had to wait for the app to load and go to a separate tab. We saw that the top messaging apps people were using were their own app. These apps that are fast and just focused on messaging. You're probably messaging people 15 times per day. Having to go into an app and take a bunch of steps to get to messaging is a lot of friction.
Of course, unbundling clearly isn't for everybody ....

I can't help but think about unbundling from an identity angle. Do the math - if you break a single application up into multiple applications, then what was a single authentication & authorization step becomes multiple such steps. And, barring some sort of integration between the unbundled applications (where one application could leverage a 'session' established for another), this would mean the user having to explicitly log in to each and every one of those applications.

The premise of 'one application could leverage a session established for another' is exactly what the Native Applications (NAPPS) WG in the OpenID Foundation is enabling in a standardized manner. NAPPS is defining both 1) an extension and profile of OpenID Connect by which one native application (or the mobile OS) can request a security token for some other native application, and 2) mechanisms by which the individual native applications can request and return such tokens.

Consequently, NAPPS can mitigate (at least one of) the negative implications of unbundling.

The logical end-state of the trend towards making applications 'smaller' would appear to be applications that are fully invisible, ie those that the user doesn't typically launch by clicking on an icon, but rather receives interactive notifications & prompts only when relevant (as determined by the application's algorithm). What might the implications of such invisible applications be for identity UX?

Rakesh RadhakrishnanESA embedded in EA [Technorati links]

November 07, 2014 12:57 AM
Similar to "Secure by Design" or "Privacy Baked In", to me no Enterprise Architecture (EA) initiative can succeed without a solid Enterprise Security Architecture (ESA) in place. An ESA is also driven by business direction and strategy, and takes business risks as the driving force to identify an "as-is" state and an "aspired" state. While ESA focuses on security, data privacy, incident response modernization/optimization, compliance, and more, EA focuses more on business process modernization, business applications, and the relevant IT infrastructure (private and public cloud). All the systems modernization programs, next-generation SDLC, data center optimization, and more that are driven by an EA effort rely heavily on the success of, and the foundation set up by, an ESA. An ESA in turn relies on the EA program - especially the enterprise data architecture (driven by enterprise-wide MDM and big data initiatives) - to identify and classify high-risk and medium-risk data and their respective data flows. Therefore a successful EA team will comprise specialist architects focused on EA for cloud/infrastructure, ESA, enterprise data, enterprise applications, enterprise integration, and more, who work as a team and collaborate extensively (collaboration leading to innovative ways of integration). Here is an excellent white paper describing the synergies of EA (TOGAF 9) and ESA (SABSA). Adopting an integrated methodology - TOGAF with SABSA, or TOGAF ADM with SEI ADDM (for secure SDLC) - is critical, as each methodology is focused on one domain (SEI ADDM for secure SDLC, TOGAF for EA, SABSA for ESA, ITIL for enterprise service management, OMG MDA for enterprise data and metadata architecture, Oracle's EA Framework for enterprise information architecture, and more).
This paper is one I authored in 2006 that speaks to these integrated views, as I had just earned my Executive Masters in IT Management from the University of Virginia, along with my TOGAF certification as an EA and my SEI certification as a software architect. Sun Microsystems also invested heavily in training its employees on Six Sigma (what was then referenced as Sun Six Sigma), along with ITIL and Prince 2. I wanted to align these tools and techniques so that they made sense when utilized together. The paper also aligns these methodologies for EA, ESA, enterprise software architecture, and more.
November 06, 2014

Rakesh RadhakrishnanInvesting in Systemic Security - An enabler or an impediment [Technorati links]

November 06, 2014 03:39 PM
An enterprise in any industry today can function as a business if and only if:
a) it can protect the intellectual property that acts as its core competitive differentiator
b) it can survive a disaster - like an earthquake - not just via collected insurance money, but through continued operations
c) it can safely and compliantly extend to cloud computing models to derive the economies of scale promised by clouds
d) it can maintain confidentiality and privacy of data - the reputational damage caused by one data breach can kill a business completely
e) it can ensure the uptime and availability of its transactional site (its internet presence for ecommerce) and its communication and collaboration tools (again, over the internet).
Therefore, if a business entity needs to survive and thrive in today's world, it is an OXYMORON to see "Security (and security investments) as an impediment to business". To me, investing in security is investing in the "quality" aspects of a business, and hence it has always been perceived as a true enabler. Investing in my health and immune system enables me to be more productive physically and mentally, which in turn helps me personally, physically and professionally. The same is true of IT security investments: anywhere between 0.5% and 1% of a business entity's revenue is expected to be its annual IT security budget (for example, $100M to $200M for a $20 billion business), when a typical enterprise spends 5% of revenue on IT as a whole.
In addition to investing prudently with a systemic security architecture (a topic for another blog post), it is equally important to make an organization's culture - every single employee - security conscious (also a topic for another blog post).

KatasoftHosted Login and API Authentication for Python Apps [Technorati links]

November 06, 2014 03:00 PM

If you’re building Python web apps — you might have heard of our awesome Python libraries which make adding users and authentication into your web apps way easier:

What you probably didn’t know, however, is that our Python library just got a whole lot more interesting. Last week we made a huge release which added several new features.

The Basics

Since the beginning of time, our Python library has made creating user accounts, managing groups and permissions, and even storing profile information incredibly easy.

If you’re not familiar with how this works, take a look at the code below:

from stormpath.client import Client

client = Client(id='xxx', secret='xxx')

# Create an app.
app = client.applications.create({
    'name': 'myapp',
}, create_directory=True)

# Create a user.
account = app.accounts.create({
    'given_name': 'Randall',
    'surname': 'Degges',
    'email': '',
    'password': 'iDONTthinkso!222',
    'custom_data': {
        'secret_keys': [],
    },
})

# Create a group.
admins_group = app.groups.create({ 'name': 'admins' })

# Add the user to the group.
account.add_group(admins_group)
The code above creates a new user account, stores some account information, creates a group, and puts that user into the group — all in a few lines of code.

With Stormpath, all users are stored on Stormpath’s servers, where we encrypt the user information and provide abstractions and libraries to make handling authentication as simple as possible.

NOTE: You can install our library via PyPI, the Python package manager: pip install stormpath.

ID Site

A while back, some of us over here were chatting about ways to make authentication better, and the idea of ID Site was born.

What if, as a developer, you didn’t have to render views and templates to perform common authentication tasks?

What if all you had to do was redirect the user to some sub-domain, and all of the authentication and registration stuff would be completely taken care of for you?

Furthermore — what if you could fully customize the way your login pages look using all the latest-and-greatest tools?

It would be totally awesome, right?

Well — that’s what ID Site does!

ID Site is a hosted product we run that allows you to easily handle complex authentication rules (including SSO, social login, and a bunch of other stuff), while providing users a really nice, clean experience.

And as of our latest Python release — you can now use it really easily!

To redirect a user to your ID Site to handle authentication stuff, all you need to do is generate a secure URL using our helper functions:

url = app.build_id_site_redirect_url('')
# Then you'd want to redirect the user to url.

By default, the normal login page looks something like this (depending on whether or not you have social login and other features enabled):


After the user signs in, they'll be redirected back to whatever URL you specify as a parameter above — then you can create a user session and persist the user's information — this way you know they've been logged in.

Again — this is super easy.

Assuming you’re writing code to handle the redirect, you’d do something like this:

result = app.handle_id_site_callback(request)
# result.account is the user's account.

Bam! And just like that, you can register, log in, and log out users.

API Keys and Authentication!

Let’s say you’re building a REST API, and need to ensure only certain users have access to the API. This means you’ve got to generate API keys for each user, and authenticate incoming API requests.

Depending on the tools and libraries you’re using, this could be either a very simple or very painful task.

With our latest Python release, you can now generate as many API keys as you want for each of your users. This means building API services just got a wholeeeeee lot easier:

# Generate an API key for a user.
key = account.api_keys.create()
print, key.secret

Each API key has two parts:

Once you’ve generated an API key for a user, and given that key TO the user, they can then use their API key to authenticate against your API service using either:

To authenticate a user via HTTP Basic Authentication, you write code that looks like this:

# Assuming the user sent you their API credentials properly, by passing in
# the `headers` option our library will handle authentication for you.
result = app.authenticate_api(headers=request.headers)
# result.account is now the user's account object!

The above code will work properly when a developer sends an authenticated API request of the form:

GET /troopers/tk421/equipment 
Accept: application/json
Authorization: Basic MzRVU1BWVUFURThLWDE4MElDTFVUMDNDTzpQSHozZitnMzNiNFpHc1R3dEtOQ2h0NzhBejNpSjdwWTIwREo5N0R2L1g4

For OAuth flows, things are equally simple — first, you need to request an OAuth token by exchanging your API key credentials:

POST /oauth/token
Accept: application/json
Content-Type: application/x-www-form-urlencoded


When this request is made, on the server-side you can generate a token by calling the authenticate_api method:

result = app.authenticate_api(headers=request.headers)
# result.token is now the user's OAuth token object!

From this point on, the developer can now pass that token as their credentials:

GET /troopers/tk421/equipment 
Accept: application/json
Authorization: Bearer 7FRhtCNRapj9zs.YI8MqPiS8hzx3wJH4.qT29JUOpU64T

And if you want to secure an API endpoint with OAuth, you just use the same authenticate_api method as before:

result = app.authenticate_api(headers=request.headers)
# result.token is now the user's OAuth token object!

Cool, right?!

Using our new API stuff, you can easily build out a public (or private) facing API service complete with both HTTP Basic Authentication and OAuth2.

Github and LinkedIn

Lastly, we’ve also added two brand new social providers to platform: Github and LinkedIn.

This means that if you want to allow your web users to log into your app via:

You can easily do so with just a few lines of code!

Future Stuff

We’re still working really hard to improve our Python library — we’re cleaning up our docs, simplifying our internal APIs, and doubling down on our efforts to make it the most awesome, simple, and powerful tool out there.

If you have any feedback (good or bad), please send us an email! We'd love to hear from you:

Kuppinger ColeLeadership Compass: IAM/IAG Suites - 71105 [Technorati links]

November 06, 2014 01:09 PM
In KuppingerCole

Leaders in innovation, product features, and market reach for IAM/IAG Suites. Integrated, comprehensive solutions for Identity and Access Management and Governance, covering all of the major aspects of this discipline such as Identity Provisioning, Federation, and Privilege Management. Your compass for finding the right path in the market.


Julian BondIf Harry Potter is so clever, why isn't he dealing with climate change, pollution and the energy crisis... [Technorati links]

November 06, 2014 08:35 AM
If Harry Potter is so clever, why isn't he dealing with climate change, pollution and the energy crisis? And peace in the Middle East. 
 Seriously, Why Isn't Hogwarts Using All That Magic To Explore Space? »
We've all asked it at some point or another: if the wizards of the Harry Potter universe can conjure up such amazing miracles, why don't they use it to solve the energy crisis or explore the wonders of the universe? Boulet's latest webcomic dreams up all magical possibilities for the wizarding world.

[from: Google+ Posts]

Rakesh RadhakrishnanBAN BYOD and Big Data [Technorati links]

November 06, 2014 07:00 AM
Here is a link to the paper and presentation I am working on, called "Strategic Technology Architecture Roadmap", based on the digital disruption that is taking shape, especially for digital health. There are hundreds of papers on Body Area Networks, several conferences that speak to the same, and several sub-topics within BAN as well (such as nano-bots traversing a nano-net for blood flow, etc.). To me, these sensor devices (for drug diagnostics, drug discovery personalization and drug delivery) connected together as a BAN can create a comprehensive (diagnostic telemetry), continuous (real-time collection) and customized (personalized) loop of diagnostics and delivery (very useful for patients' quality of life and for efficacy).

Now these BANs talk to clouds via a mobile or BYOD device (like the iPhone 6 with its tons of mobile medical apps), over 4G LTE today and, in the near future, 5G networks (gigabit upload/download speeds). These powerful endpoints become the conduit and gateway between personal small networks (such as BAN; VAN, for Vehicle Area Networks; HAN, for Home Area Networks; and more) and the clouds. Hence a BYOD device securely bootstrapping into the access networks (4G and 5G) becomes very critical for secure paths to the clouds. The third dimension in these developments is big data technologies (like map reduce, tensors, tuples, DDS, metadata and more) that allow exceptionally high-speed analytics on these high-volume, multi-variety (multimedia-like data types) and high-velocity data sets moving at gigabytes per second bidirectionally.

This is expected to strain the back-end systems and hardware that host these service clouds and create bottlenecks in the cloud systems. Then I see that Oracle has released systems based on T5 CPUs (see this video) - truly amazing technology - something like 128 or 256 cores (the cores themselves communicate over GB-per-second internal shared memory) with 4 terabytes of system memory, so you can store the entire database in memory if needed - per session/per user/per patient.

To me, as we move forward from 2015 to 2020, the health care industry (think Fitbit), the telecom industry (think iPhone 6 and 5G) and cloud computing (think SOA on steroids) are going to discover new USAGE models for these disruptive digital technologies (IoT, big data, social, mobile, etc.), which will act as a catalyst for the next economic boom, especially led by these industries, as we are now laying the infrastructural foundation for an innovation-driven economic future (akin to the national highways of the 1920s revitalizing the US economy and spurring new industries).

November 05, 2014

Paul MadsenSticky Fingers [Technorati links]

November 05, 2014 06:51 PM
Digits is a new phone-number based login system from Twitter.
Digits is a simple, safe way of using your phone number to sign in to your favorite apps.
Note that Digits is not just using your phone to sign in (there are a number of existing mobile-based systems), but your phone number. 

Digits is an SMS-based login system (unlike mobile OTP systems like Google Authenticator). When trying to log in to some service, the user supplies their phone number, at which they soon receive an SMS carrying a one-time code to be entered into the login screen. After Twitter's service validates the code, the application can be (somewhat) confident that the user is the authorized owner of that phone number.

Now, the above makes it clear that Digits relies on only a single factor, ie a 'what you have' of the phone associated with the given phone number. This post even brags that you need not worry about any additional account names or passwords. But that same post claims that Digits is actually more than a single factor: 'an easy way for your users to manage their Digits accounts and enable two-factor authentication'.
As much as I squint, I can see no other factor in the mix. (And it sure isn't the phone number.)

Digits apparently also has privacy advantages.
Digits won't post on your behalf, so what you say and where you say it is completely up to you
Well, to be precise, Digits can't post on your behalf ... And is it not somewhat ironic that Twitter touts as an advantage of Digits the fact that it is not hooked into your Twitter account??

Presumably this is presented in contrast to the existing 'Sign-in with Twitter' system, use of which can allow a user to authorize applications to post to Twitter on their behalf (as the system is based on OAuth 1.0).

But of course, 'Sign-in with Twitter' allows applications to post on behalf of users only because Twitter made the business decision to make this permission part of the default set of authorizations. Twitter could have chosen to make their consent more granular and tightened up the default.

Dick Hardt analyzed Digits and highlighted two fundamental issues of using phone numbers as identifiers:

  1. the privacy risk associated with a user presenting the same identifier to all applications (as it enables subsequent correlation amongst those applications without the user's consent). It's pretty trivial to spin up new email addresses (even disposable ones) to segment your online interactions and prevent correlation. Is that viable for phone numbers?
  2. that applications generally aren't satisfied with only knowing who a particular user is, but almost always want to know the what as well, ie their other identity attributes, social streams etc

Dick, having made the second point, perversely then conjectures that it may not be an issue
as mobile apps replace desktop web sites, the profile data may not be as relevant as it was a decade ago
I can't imagine why the native vs browser model would impact something as fundamental as wanting to understand your customer?  

Twitter actually tries to position this limitation as a strength of Digits
Each developer is in control with Digits. It lets you build your own profiles and apps, giving you the security of knowing your users are SMS-verified. 
The motivation becomes a bit clearer when you read more:
We built Digits after doing extensive research around the world about how people use their smartphones. What we found was that first-time Internet users in places like Jakarta, Mumbai and São Paulo were primarily using a phone number to identify themselves to their friends.
Twitter must have looked at their share in these markets and determined they needed a different way to mediate user's application interactions.

Source -

KatasoftBuild an API Service with Oauth2 Authentication, using Restify and Stormpath [Technorati links]

November 05, 2014 06:00 PM

Building APIs is a craft; you have to balance the integrity of your data model with the convenience needs of your API consumers. As you build an API, you will come across these questions:

In this article I’ll focus on the concerns of authentication and access control, specifically within the context of Restify – a Node.js Framework for building APIs. I will walk you through the process of building an API with the Restify framework and how you can secure it with Stormpath’s API Authentication features.

We’ll be using the Oauth2 Client Credentials workflow as an authentication strategy and JWTs for the format of the tokens.

I’ll touch on client libraries and the resource design. That section is heavily influenced by how we have designed our own API and I encourage you to read our principles on Designing REST JSON APIs and Node API Clients

Why Restify?

Restify is an HTTP framework for Node.js that is focused on building API applications. It differs from Express (the other Node.js web framework) in its focus on APIs. Express gives you a lot of things you need for web applications, like a templating engine and a component-like “middleware” design. Because Restify is focused on APIs, it does not provide those things. Instead it provides things like DTrace support and request throttling – very important tools for API services.
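As an aside, the throttling idea is easy to picture as a token bucket: each client gets a small burst of tokens that refill at a steady rate, and a request is allowed only if a token is available. The sketch below is illustrative only - not Restify's actual throttle plugin - with hypothetical names:

```javascript
// Token-bucket rate limiter: `burst` tokens per client, refilled at
// `rate` tokens per second; a request consumes one token.
function TokenBucket(rate, burst, now) {
  this.rate = rate;
  this.burst = burst;
  this.tokens = burst;
  this.last = now;
}

TokenBucket.prototype.consume = function (now) {
  // Refill proportionally to elapsed time, capped at the burst size.
  this.tokens = Math.min(this.burst, this.tokens + (now - this.last) * this.rate);
  this.last = now;
  if (this.tokens >= 1) {
    this.tokens -= 1;
    return true;  // allowed
  }
  return false;   // throttled: respond 429 Too Many Requests
};

var bucket = new TokenBucket(1 /* per second */, 2 /* burst */, 0);
console.log(bucket.consume(0)); // true
console.log(bucket.consume(0)); // true
console.log(bucket.consume(0)); // false - burst exhausted
console.log(bucket.consume(1)); // true  - one token refilled after 1s
```

In practice you'd keep one bucket per client (e.g. keyed by API key or IP) and check it in a filter before the route handler runs.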

What is Stormpath?

Stormpath is an API service that allows developers to create, edit, and securely store user accounts and user account data, and connect them with one or multiple applications. Our API enables you to:

In short: we make user account management a lot easier, more secure, and more scalable than what you’re probably used to. Our sample application will use Stormpath to provision API keys for the users of our API.

Ready to get started? Register for a free developer account at

Why Oauth2 and JWT?

The current de-facto practice for API Authentication is to provide an API Key/Secret combination to the consumer of your API and have them submit this as the Authorization header on every request, which looks like this:

Authorization: Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ==

The value after Basic is a base64 encoded version of the key and secret. This will be sent on every request. Assuming you use HTTPS, this is a secure way to authenticate users.
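To make that concrete, here's how you'd build such a header yourself in Node (the example header above decodes to the classic 'Aladdin'/'open sesame' credential pair):

```javascript
// Build an HTTP Basic Authorization header from an API key id and secret.
function basicAuthHeader(keyId, keySecret) {
  return 'Basic ' + Buffer.from(keyId + ':' + keySecret).toString('base64');
}

console.log(basicAuthHeader('Aladdin', 'open sesame'));
// Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ==
```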

In our demo application we will take this a step further and use Oauth2, specifically the client-credentials workflow. In this workflow the user supplies the Basic Auth once and then receives a token that contains “claims” which can be used for authentication (and access control!) on subsequent requests. The token is always validated by your server, and because it already contains the claims, it is stateless.

Here is an overview of what the flow looks like:

Oauth2 Client Credentials Workflow Basic Auth

The stateless, portable nature of the token makes this strategy superior to Basic Auth. It also helps to future-proof your application for when your customers ask you for it.

At Stormpath, we use JWT as the token format because we believe it’s a great way to structure the internal data of the token. If you’re looking to build a Single-Sign-On (SSO) architecture you will find JWT very friendly to that use case. Also: it’s basically taken off as the standard for Oauth tokens.

For more see Claims Based Identity and JSON Web Token (JWT)

Our Sample Application – The Things API

For our demo application we’re going to build the Things API.

Things API

We have a collection of things; that collection will be available at /things. We want to return a collection of all things when someone makes a GET request to that URL. If someone posts to it, we will create a new Thing in the Things collection and assign it an ID.

All thing resources will be available as /thing/:id and we want to allow deletion of things.

All users (including anonymous users) must be able to read the things collection. Only authenticated users are allowed to post new things. Only trusted users are allowed to delete things. Trusted users will be in a special group (we will use Stormpath to manage the user group state).
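Those three access rules can be summed up in a small policy function. This is just a sketch to make the rules concrete (hypothetical names; in the demo itself the checks are done with Stormpath filters):

```javascript
// Access policy for the Things API:
//   GET    - anyone, including anonymous users
//   POST   - any authenticated user
//   DELETE - only users in the 'trusted' group
// `user` is null for anonymous requests; `user.groups` lists group names.
function isAllowed(method, user) {
  if (method === 'GET') return true;
  if (!user) return false; // POST and DELETE require a login
  if (method === 'POST') return true;
  if (method === 'DELETE') return user.groups.indexOf('trusted') !== -1;
  return false; // anything else is denied by default
}

console.log(isAllowed('GET', null));                                // true
console.log(isAllowed('POST', null));                               // false
console.log(isAllowed('DELETE', { groups: ['users'] }));            // false
console.log(isAllowed('DELETE', { groups: ['users', 'trusted'] })); // true
```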

We’ll be creating three separate node modules: a server, a client library, and a example app that uses the client library. Our code structure will look like this:

|--things-api-server/   <-- the API server
|   |--server.js
|   |--things-db.js
|   |--package.json
|--things-api/          <-- the API client library
|   |--index.js
|   |--register.js
|   |--package.json
|--developer-app/       <-- the client demo app
|   |--app.js
|   |--package.json

As we work through this demo, we will be context switching between these different folders and files. If you get lost or aren’t sure where to paste something please see the example files in the git repo to get a preview of what the final code will look like

HTTPS – Make Sure You Use It

You MUST use HTTPS in production!

In this demo we will work on our local machine and will not be using HTTPS – but you MUST use HTTPS in production. Without it, all API authentication mechanisms are compromised.

You have been warned.

Server Prep – Create the Server Module

If you don’t already have Node.js on your system, head over and install it on your computer. In our examples I will be using a Mac, all commands you see should be entered in your Terminal (without the $ in front – that’s a symbol to let you know that these are terminal commands).

First, create a folder for this module and change into that directory:

$ mkdir things-api-server
$ cd things-api-server

Now that we are in the folder we want to create a package.json file for this module. This file is used by Node.js to keep track of the libraries (aka modules) your module depends on. To create the file:

$ npm init

You will be asked a series of questions, for most of them you can just press enter to allow the default value to be used. I decided to call my main file server.js, I set my own description and set the license to MIT – everything else I left as default.

Now install the required packages:

$ npm install --save restify stormpath-restify uuid underscore

The --save option will add these modules to your dependencies in package.json. Here is what each module does:

Note: Restify parlance uses filter in lieu of middleware. Both are valid, but for consistency we will use filter here.

Gather Your Stormpath API Credentials and Application Href

We will be using Stormpath to manage our users and their API keys, and our server will need to communicate with the Stormpath API in order to do this. If you haven’t already signed up for a free Stormpath developer account, you can create one on the Stormpath website.

Like all APIs the communication between your app and Stormpath is secured with an “API Key Pair”. You can download your API key pair as a file from your dashboard in the Stormpath Admin Console. Retain this file – we will use this in a moment.

While you are in the Admin Console you want to get the href for your default Stormpath Application. In Stormpath, an Application object is used to link your server app to your user stores inside Stormpath. All new developer accounts have an app called “My Application”. Click on “Applications” in the Admin Console, then click on “My Application”. On that page you will see the Href for the Application. Copy this — we will need it shortly.

Coding Time – Build the Server Code (server.js)

It’s time to create the actual server – the Node.js process that serves API requests. You can create the file from your editor, or in the terminal:

$ touch server.js

Now open that file and paste in this boilerplate to get Restify up and running:

var restify = require('restify');

var host = process.env.HOST || '127.0.0.1';
var port = process.env.PORT || '8080';

var server = restify.createServer({
  name: 'Things API Server'
});

// Parse JSON request bodies onto req.body
server.use(restify.bodyParser());

// Log every incoming request
server.use(function logger(req,res,next) {
  console.log(new Date(),req.method,req.url);
  next();
});

// Log uncaught exceptions and return the error to the client
server.on('uncaughtException',function(request, response, route, error){
  console.error(error.stack);
  response.send(error);
});

server.listen(port, host, function() {
  console.log('%s listening at %s', server.name, server.url);
});
That’s the bare-bones you need to get the server running. What that code does:

- Creates a Restify server named “Things API Server”
- Registers a filter that parses JSON bodies and another that logs every request
- Registers a handler that logs uncaught exceptions and reports them to the client
- Starts the server, listening on the configured host and port

You can take a sneak peek at your server by running it like so:

$ node server.js

If all is well you will see this message in the terminal:

Things API Server listening at http://127.0.0.1:8080

At this point you can try it out by requesting a URL from the server. We’ll use Curl for the example:

$ curl http://127.0.0.1:8080

Because we haven’t created any routes in the server yet, you will get a “Resource Not Found” message:

{"code":"ResourceNotFound","message":"/ does not exist"}

If you inspect the details of this message by specifying verbosity with Curl, you’ll see that the status code is set to 404:

$ curl -v http://127.0.0.1:8080

* About to connect() to 127.0.0.1 port 8080 (#0)
*   Trying 127.0.0.1...
* Adding handle: conn: 0x7fb32c004000
* Adding handle: send: 0
* Adding handle: recv: 0
* Curl_addHandleToPipeline: length: 1
* - Conn 0 (0x7fb32c004000) send_pipe: 1, recv_pipe: 0
* Connected to 127.0.0.1 (127.0.0.1) port 8080 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.30.0
> Host: 127.0.0.1:8080
> Accept: */*
>
< HTTP/1.1 404 Not Found
< Content-Type: application/json
< Content-Length: 56
< Date: Mon, 03 Nov 2014 04:58:27 GMT
< Connection: keep-alive
<
* Connection #0 to host left intact

{"code":"ResourceNotFound","message":"/ does not exist"}

Now let’s move on and register some route handlers!

Set Up Your Things Database

In a real world situation you would use a proper database engine, such as MongoDB or PostgreSQL. For the simplicity of this demo we will create a simple in-memory database that only lives for the duration of the server. Create a file called things-db.js and place the following into it:

var uuid = require('uuid');
var _ = require('underscore');

module.exports = function createDatabase (options) {

  var baseHref = options.baseHref;

  // All things are stored in this object, keyed by id
  var things = {};

  // Convert a stored thing into an API resource with an href
  function thingAsResource(thing){
    var resource = _.extend({
      href: baseHref + thing.id
    }, thing);
    return resource;
  }

  function thingsAsCollection(){
    return Object.keys(things).map(function(id){
      return thingAsResource(things[id]);
    });
  }

  return {
    all: function(){
      return thingsAsCollection();
    },
    getThingById: function(id){
      var thing = things[id];
      return thing ? thingAsResource(thing) : thing;
    },
    deleteThingById: function(id){
      delete things[id];
    },
    createThing: function(thing){
      var newThing = _.extend({
        id: uuid()
      }, thing);
      var newRef = things[newThing.id] = newThing;
      return thingAsResource(newRef);
    }
  };
};
Now we need to require this database in our server.js and create an instance of the database. Place this in your server.js file, just below the host and port declarations:

var thingDatabase = require('./things-db');

var db = thingDatabase({
  baseHref: 'http://' + host + ( port ? (':'+ port) : '' ) + '/things/'
});
That creates a database instance and tells it the base URL of the server so that it can assign the appropriate href to resources.
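To make that concrete, here is a small standalone sketch of the href the database will build for a resource (the host, port, and id values are just examples):

```javascript
// How the database turns a stored thing into an addressable resource:
// the base URL of the server plus the thing's generated id.
var host = '127.0.0.1';
var port = '8080';
var baseHref = 'http://' + host + (port ? (':' + port) : '') + '/things/';

var exampleId = 'd9428888-122b-11e1-b85c-61cd3cbb3210';
console.log(baseHref + exampleId);
// http://127.0.0.1:8080/things/d9428888-122b-11e1-b85c-61cd3cbb3210
```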

Set Up the GET Routes

Now that we have our DB instance set up, we can wire up route handlers to it. We’ll do the collection and single-resource GET routes first, as they do not require any authentication. Insert these route handlers above your server.listen statement but after your server.use statements:


server.get('/things', function(req, res, next) {
  res.send(db.all());
  next();
});

server.get('/things/:id', function(req, res, next) {
  var id = req.params.id;
  var thing = db.getThingById(id);
  if (!thing) {
    return next(new restify.errors.ResourceNotFoundError());
  }
  res.send(thing);
  next();
});

Restart your server (Ctrl + C to kill the process in your terminal) and try it again with Curl. If you request the things collection you will get an empty collection (seen as empty array brackets):

$ curl http://127.0.0.1:8080/things

[]

Trying to get a resource that does not yet exist will result in a 404 message:

$ curl http://127.0.0.1:8080/things/some-id-that-does-not-exist


Great! Now let’s actually create some things by setting up a POST handler for the collection. To do that, we need to set up authentication for the routes.

Pro tip: use a file watcher like nodemon to automatically restart your server as you edit it.

Set Up Authentication

As mentioned above, we will use the Oauth2 client credentials workflow. This means we need a POST handler for /oauth/token and some code to exchange the Basic Auth credentials for a JWT. We will also need a filter for any route that requires the JWT, so we can assert its existence and validity before allowing the rest of the route handlers to be processed.

To meet these requirements we will leverage Stormpath and its API Key authentication features. The Stormpath Node SDK contains a method on application instances, authenticateApiRequest, which does everything we just mentioned. The stormpath-restify module wraps that method as a filter, making it even easier to use.
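Under the hood, the first leg of the client credentials exchange is just an HTTP Basic Authorization header built from the API key pair. A minimal standalone sketch of what the client sends (the key id and secret are placeholders):

```javascript
// Build the Basic Authorization header a client sends when it
// exchanges its API key pair for an access token.
var keyId = 'API_KEY_ID';
var keySecret = 'API_KEY_SECRET';

// Basic auth is base64("id:secret") prefixed with "Basic "
var header = 'Basic ' + new Buffer(keyId + ':' + keySecret).toString('base64');
console.log(header);
// Basic QVBJX0tFWV9JRDpBUElfS0VZX1NFQ1JFVA==
```

This is exactly what curl’s -u flag will do for us later in the tutorial.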

In order to use these filters you will need to configure a “filter set” – a set of filters that are bound to your Stormpath Application. To create the filter set, add this to the top of your server.js file, after the restify require:

var stormpathRestify = require('stormpath-restify');

// Fill in the API key pair and Application href you gathered earlier
// (check the stormpath-restify docs for the exact property names)
var stormpathConfig = {
  apiKeyId: 'YOUR_STORMPATH_API_KEY_ID',
  apiKeySecret: 'YOUR_STORMPATH_API_KEY_SECRET',
  appHref: 'YOUR_STORMPATH_APP_HREF'
};

var stormpathFilters = stormpathRestify.createFilterSet(stormpathConfig);

The variable stormpathFilters is now an object with factory functions that you can use to create the necessary authentication filters.

To use the Oauth filter, simply create one and assign it to a variable; you can paste this below the code we just added:

var oauthFilter = stormpathFilters.createOauthFilter();

Now you can register a POST handler which uses this filter as the only handler. Paste this after your server.use statements:

server.post('/oauth/token', oauthFilter);

That’s it! If your API user posts a valid API Key pair to that URL, they will receive a token in exchange. If it’s not valid they will get a descriptive error.

Once your user has obtained a token they will use it to post a new thing. We’ll create a POST handler for this, and apply the Stormpath filter to it as well. This will check that the token is valid and, if so, allow the POST to continue into our handler. If the token is not valid, an error will be sent and our handler will not be reached. Here is the handler to paste in below your other routes:

server.post('/things', [oauthFilter, function(req, res, next) {
  var newThing = db.createThing(req.body);
  res.send(201, newThing);
  next();
}]);

Try It Out – Token Exchange

In order to try our new POST endpoint we need to do the token exchange and obtain a JWT.

At this point let’s pretend we are a consumer of our API and need to provision an account. Later we’ll discuss how to automate this, but for this first user you can head over to the Stormpath Admin Console and go to your “My Application” and create a dummy account in the Directory of that application. After creating the account, create an API Key pair (available on the Account details view).

Once you have the key pair, you can exchange those credentials for a JWT by using the new /oauth/token route on your server:

$ curl -u ID:SECRET -X POST http://127.0.0.1:8080/oauth/token

You’ll get a JSON response in your terminal with the “access_token” value, along these lines (the token is truncated here):

{
  "access_token": "eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9...",
  "token_type": "bearer",
  "expires_in": 3600
}
Copy this “access_token” value and use it in your next request when you create a thing:

$ curl -X POST http://127.0.0.1:8080/things -H "Authorization: Bearer YOUR_TOKEN_HERE" -H "Content-Type: application/json;charset=UTF-8" -d '{"myThing":"isAnAwesomeThing"}'

The API should respond with the thing you’ve created and its href identifier, along these lines (your generated id will differ):

{"href":"http://127.0.0.1:8080/things/<generated-uuid>","id":"<generated-uuid>","myThing":"isAnAwesomeThing"}
If we ask for the entire collection again, we will see it in the set:

$ curl http://127.0.0.1:8080/things

Pretty sweet, right? You’ve now got an API with authorization, resources and collections. But… working with Curl gets pretty clunky once you start dealing with tokens. We still need to build some other features into our API, but I want to switch over to the client library for a little while, so it’s quicker to use our API and ensure things are working as we expect.

Build the Client Library

As mentioned above, you should check out Les Hazlewood’s post on Designing Node API Clients. Our client library will look very similar: it abstracts how we interact with resources and collections and exports an API that is developer-friendly with well-named methods.

For our Things API we’re going to use Restify in the client as well. In addition to the server framework we’ve been using, Restify provides a set of client libraries you can use to build your own client. These great little clients do a lot of the underlying HTTP and content-type work for you.

The stormpath-restify library includes an Oauth2 client that extends Restify’s JSON client with credential exchange and token handling – all the stuff that we just did with Curl.

The client library for your API will be provided to your end-users as a node module, published on NPM, so we should create a new project for this. Create a new folder and do the npm init process, as we did for the server. I’ll call mine the “things-api” – a predictable name end-users will recognize when they look for a client for my service:

$ cd ..
$ mkdir things-api
$ cd things-api
$ npm init
$ npm install --save restify stormpath-restify underscore prompt
$ touch index.js

We will use index.js as the entry point for this module, as it’s very straightforward. You may want something more elaborate as your client module evolves.

Paste this into your index.js as a starting point:

var oauthClient = require('stormpath-restify/oauth-client');

module.exports = {
  createClient: function(opts){
    opts.url = opts.url || 'http://127.0.0.1:8080';

    // This creates an instance of the oauth client,
    // which will handle all HTTP communication with your API

    var myOauthClient = oauthClient.createClient(opts);

    // Here we directly bind to the underlying GET method,
    // as this is a simple request

    myOauthClient.getThings = myOauthClient.get.bind(myOauthClient, '/things');

    return myOauthClient;
  }
};
With that you can now export a client that has a method, getThings, which gets all the things in the collection and returns them to the developer. Super simple. What does it look like for them to use this client library? We’ll cover that in the next section.

While the collection GET is simple and can be bound directly to the underlying get method, the add-thing method will have some more logic, because we want to do some “client side” validation to assert that the data is correct before we even try posting it to the server.

Here is what that looks like, paste this into index.js after the getThings method:

myOauthClient.addThing = function addThing(thing, cb){
  if(typeof thing !== 'object'){
    return cb(new Error('Things must be an object'));
  }
  myOauthClient.post('/things', thing, function(err, newThing){
    if(err){
      return cb(err);
    }
    cb(null, newThing);
  });
};
Build the Developer App

Before switching back to the server, let’s also build our developer demo app. This shows you how a developer would use your client library to consume your API. Create a new folder for this module and initialize it with dependencies and an app.js file:

$ cd ..
$ mkdir developer-app
$ cd developer-app
$ npm init            # use app.js as the main entry
$ npm install --save prettyjson
$ touch app.js

Now paste the following into your app.js:

// Here we use a local, relative require path to require your
// client library. When you publish on NPM you should change
// it to the absolute module name

var thingsApi = require('../things-api');

var prettyjson = require('prettyjson');

// Use the API key pair of the developer account you created earlier
// (consult the stormpath-restify oauth client docs for the exact
// option names)
var client = thingsApi.createClient({
  key: 'DEVELOPER_API_KEY_ID',
  secret: 'DEVELOPER_API_KEY_SECRET'
});

// Read all the things in the collection

client.getThings(function(err,things) {
  if (err) {
    return console.error(err);
  }
  console.log('Things collection has these items:');
  console.log(prettyjson.render(things));
});

// Create a new thing in the collection

client.addThing({
  myNameIs: 'what?'
}, function(err,thing) {
  if (err) {
    return console.error(err);
  }
  console.log('New thing created:');
  console.log(prettyjson.render(thing));
});
Look familiar? If you’ve used API clients before, it should – but this time YOU created it and it’s for your API :)

You can demo your app by invoking it in the Terminal (make sure that you have the server running in another terminal):

$ node app.js

Round Out the Server – Delete for Trusted Users

We have one last handler to implement in the server, and that is the DELETE method for trusted users. We want users in the ‘trusted’ Group to be able to delete resources from the things collection.

We’re going to set up another filter, using Stormpath to help us out. stormpath-restify provides a group filter which allows us to assert that a user is in a given group – in this case, a group called trusted (you can create this group in the Stormpath Admin Console). If the user is in the group, we pass control to your handler; otherwise we issue a 403 error response. If you wish to customize the error response, you can pass an errorHandler property to createGroupFilter – a function that receives the arguments (err, req, res, next).
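As a sketch of what such a custom errorHandler might look like – the function name and response body here are made up for illustration; only the (err, req, res, next) signature comes from the library:

```javascript
// A hypothetical errorHandler you could pass to createGroupFilter.
// It sends a friendlier 403 body instead of the default error.
function trustedErrorHandler(err, req, res, next) {
  res.send(403, { message: 'Only trusted users may delete things' });
}

// Exercise it with a stub response object, to show what a rejected
// request would receive:
var sent = {};
var stubRes = {
  send: function(status, body) { sent.status = status; sent.body = body; }
};
trustedErrorHandler(new Error('not in group'), {}, stubRes, function(){});
console.log(sent.status, sent.body.message);
// 403 Only trusted users may delete things
```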

To create the trusted group filter, paste this below your other filter invocations:

var trustedFilter = stormpathFilters.createGroupFilter({
  inGroup: 'trusted'
});
This filter will assert that the authenticated user is in the trusted group.

Let’s use this new filter to set up our DELETE handler:

server.del('/things/:id', [oauthFilter, trustedFilter, function(req, res, next) {
  var id = req.params.id;
  var thing = db.getThingById(id);
  if (!thing) {
    return next(new restify.errors.ResourceNotFoundError());
  }
  db.deleteThingById(id);
  res.send(204);
  next();
}]);

Now that the server can accept DELETE requests, we want to add a corresponding convenience method to our client. Paste this method into your client library, below the other methods:

myOauthClient.deleteThing = function deleteThing(thing, cb){
  if(typeof thing !== 'object'){
    return cb(new Error('Things must be an object'));
  }
  if(typeof thing.href !== 'string'){
    return cb(new Error('Missing property: href'));
  }
  myOauthClient.del(thing.href, function(err){
    if(err){
      return cb(err); // If the API errors, just pass that along
    }
    // Here you could do something custom before
    // calling back to the original callback
    cb(null);
  });
};
This method ensures the developer is passing an actual thing object, with an href, before making the request of the server.

At this point your developer can use the client to delete things:

client.deleteThing(thing, function(err) {
  if (err) {
    return console.error(err);
  }
  console.log('Thing was deleted');
});

If you haven’t created the trusted group yet, or haven’t added your account to it, you will get a 403 error when you try to delete the item. To create the group and add the user to it, you can use the Stormpath Admin Console, or use our Node SDK to talk directly with the API to create the group and the account membership.

How To Provision Your API Keys

The last thing to discuss is how to provision API Keys for your end developers. Clearly you wouldn’t want to use the Stormpath Admin Console to create every API Key pair. Instead, you’ll want to automate this process.

From a product perspective, I suggest you offer a web-based landing page where someone can create an account and then view a dashboard where they can provision their own API keys.

Stormpath can help with this process as well. We have great workflows around account creation and email verification. For building the web-based component of your registration workflow, I suggest trying out our stormpath-express library. Yes, I am suggesting that you use Express for this – and that’s because Express is designed for that! It’s totally normal to have one server for your API and one for your web app(s). In fact, it’s encouraged: for a good read, check out the Twelve-Factor App.

However! I don’t want to leave you hanging, so I’ll show you a very simple way to allow developers to obtain an API key, but only after they have verified their email address.

In order to enable email verification, log into the Stormpath Admin Console, visit the Workflows section of the directory in your default “My Application”, and enable the verification workflow.

After the workflow is enabled, we will implement another route handler to leverage two more Stormpath filters. Create them below your other filter invocations:

var newAccountFilter = stormpathFilters.newAccountFilter();
var accountVerificationFilter = stormpathFilters.accountVerificationFilter();

Then we’ll use those filters with two new routes:

server.post('/accounts', newAccountFilter);
server.get('/verifyAccount', accountVerificationFilter);

These routes will allow users to post their email, password, and other required user information to create an account. To make that easier, let’s create a quick command line tool developers can use to register for our API service.

Command-line Registration Tool

Since we’re not building a full-blown web app for handling account creation, we’ll create a small command-line utility instead. I’ve created an example of how you might do this, but it’s a pretty large file so I won’t inline it here. You can get the source here: register.js source

This register.js file leverages the prompt library to collect the new developer’s email, password, and other required user information from the command line, and then posts that information to the /accounts route we just created.

Switch back to the things-api directory and copy the source of that file into a register.js file in your client module. Then modify the package.json for your client library to have this configuration:

  "bin" : { "register" : "./register.js" }

With that configuration, you can tell your developers to execute this command after they’ve installed your client module in their application:

$ ./node_modules/.bin/register

If you haven’t published your module to NPM you can still try this CLI tool by running it from inside the things-api directory:

$ node register.js

Doing this will bring up the registration CLI:

Registration CLI tool

Stormpath will send an email to the given email address, with a link that will retrieve an API Key Pair. You want to customize the email message to point to the /verifyAccount URL that we created in the API server. You can configure the email in the Stormpath console: “My Application” —> Directory —> Workflows. Then configure the message like this, making sure you set the Base URL to your local development app:

Email Template

With that email template, your API users will receive a confirmation email:

Email Message

Because we configured the email template to point to our server, when the user clicks on the email link they will land on our Restify server, where the Stormpath filter will kick in. It will verify that this link was actually generated by Stormpath, and if valid it will create an API key pair for the user and show it to them:

Api Key Pair

Your developer can then take those keys and start using them with your client library. Success!

The Proverbial “Me”

One last piece of awesome: the /me route. This very common route in APIs lets the consumer know who they are currently authenticated as.

Setting this up in the server is incredibly simple. The stormpath-restify library will attach the Stormpath account to req.account if the user is successfully authenticated. Thus, we just need a simple route handler:

server.get('/me', [oauthFilter, function(req, res, next) {
  res.send(req.account);
  next();
}]);
Adding a convenience method to the client library is equally easy, because it’s just a plain GET request:

myOauthClient.getCurrentUser = myOauthClient.get.bind(myOauthClient,'/me');

That allows our developers to do this in their application:

client.getCurrentUser(function(err, user) {
  if (err) {
    return console.error(err);
  }
  console.log('Who am I?');
  console.log(user.fullName + ' (' + user.email + ')');
});
Wrap It Up

And with that… we have built a fully-functional API, complete with registration and API Key distribution – go forth and build more APIs!

I hope you have learned a bit about the Oauth2 client credentials flow. I also hope I’ve shown you how easy it is to use Stormpath to implement that flow in your API, so you can get on with what you really want to do – writing your API endpoints.

If you’d like to learn more about our Restify integration please head over to stormpath-restify on Github.

If you want to dig even deeper into Stormpath, you should check out the Stormpath Documentation as well as the Stormpath Node.js SDK.

For help with all Stormpath libraries and integrations, just hit us up – we’re happy to help!

Julian BondOn this day, it's especially important to follow the directions on boxes of matches. "Keep Dry and Away... [Technorati links]

November 05, 2014 09:40 AM
On this day, it's especially important to follow the directions on boxes of matches. "Keep Dry and Away From Children". However if you are taking your little darlings to bonfire night, you should also heed the advice from Scarfolk Council. "Always Light Children At Arms Length"
 "Arms Length" Safety Poster (Bonfire Night Part 1) »
When Scarfolk Council issued the poster below in 1972, it was met with complaints from parents, teachers and arsonists. While the poster does offer the safety guideline of an 'arms length', it does not specify how long that a...

[from: Google+ Posts]
November 04, 2014

Kuppinger ColeOne Identity for All: Successfully Converging Digital and Physical Access [Technorati links]

November 04, 2014 11:09 PM
In KuppingerCole Podcasts

Imagine you could use just one card to access your company building and to authenticate to your computer. Imagine you had only one process for all access, instead of having to queue at the gate waiting for new cards to be issued and having to call the helpdesk because the system access you requested still isn’t granted. A system that integrates digital and physical access can make your authentication stronger and provide you with new options, by reusing the same card for all access infrastruc...

Watch online

Kuppinger ColeKuppingerCole Analysts' View on Connected Enterprise [Technorati links]

November 04, 2014 10:59 PM
In KuppingerCole

The digitalization of businesses has created an imperative for change that cannot be resisted. IT has to support fundamental organizational change. IT must become a business enabler, rather than obstructing change.

However, enabling new forms of digital business requires that IT take a fundamentally different role. In fact, IT is not about technology anymore, it must focus on understanding and fostering the digital business. It must enable the shift to new business models and...

Kantara InitiativeWhat is IoT without Identity? [Technorati links]

November 04, 2014 02:45 PM

What is IoT without Identity? IoT without identity is just oT.

IoT offers a world of promise that is (partially) built upon leveraging the human-to-device connection for new opportunities. Without Identity, IoT is still the enabler of M2M communications, but perhaps with less impact toward transforming our connected lives. IoT+Identity represents a powerful equation that brings identity, security, software, hardware, policy and privacy experts to the same table.

Identity services are the key that unlocks the world of IoT for human interaction in all walks of life. We see many opportunities for IoT to improve lives, ranging from devices that monitor our health and quality of sleep to those that help us manage our homes or cars. To fully leverage the beneficial powers of IoT, vendors need to know that IoT+Identity enabled products and services won’t fail and severely damage their brand reputation. Users need to know these new tools respect their preferences. See “I’m Terrified of My New TV: Why I’m Scared to Turn This Thing On — And You’d Be, Too.” Collaboration is needed to address the privacy and security requirements of consumers, enterprises, and governments to develop scalable programs for verified assurance of technologies and policies that will transform our connected life.

Serious access management challenges are approaching. The number of relationships between people, entities, and things will be larger by orders of magnitude. How will users manage their connected lives? How will they set preferences for data-sharing permissions? The sheer number of devices, connections, and relationships presents unique opportunities and challenges. At the low end of the scale, the number of devices and connections will be in the billions. User Managed Access provides an open-standard approach to help empower and engage users in the management of resource access and sharing.

The multitude of sensors and apps that are gathering and communicating personal information magnifies security and privacy risks. Personal data can fall into the wrong hands, be sold without consent, or be leveraged in ways the user did not imagine – like having one’s car insurance rates rise due to recorded driving habits. Users will need to know their personal data can be managed and properly protected for privacy. Smart physical spaces will become more and more prevalent. Legislation is developing around proper notice and consent practices, both online and in physical spaces. The Kantara Consent and Information Sharing WG is developing a number of solutions to address these issues and to develop a more useable form of consent.

Interoperability of IoT+Identity will also have challenges. When device identifiers are not standardized, discovery mechanisms are but one of the challenges to solve. At Kantara, the IDentities of Things WG is hard at work delivering an industry analysis of the current landscape: opportunities, challenges, and gaps to address.

Identity Relationship Management (IRM) focuses on building relationships using identity technologies, practices, and techniques. IRM is especially powerful to leverage the IoT+Identity connection. Kantara is the home of IRM development working to connect the components that are necessary to unleash the power of IoT+Identity. This week we’re at the Europe IRMSummit produced by ForgeRock. The venue is referenced by National Geographic as the 3rd top garden park in the world (See Powerscourt #3). We are surrounded by a picturesque thick fog and forest which works wonders for keeping wandering identity and hardware experts in one place!

The IRM Summit has 3 tracks.

  1. Identity Relationship Management (IRM) – building relationships using identity service technologies, techniques, and practices.
    See the Laws of Relationships (a work in progress) to get a flavour.
  2. Digital Citizen – identity technology as an enabler of innovative and dynamic Government and civil services.
  3. Digital Transformation – identity technologies that transform the way we do business and our lives.

Kantara Initiative members are hard at work innovating IRM solutions and practices for businesses, governments, and for our connected lives. Building on the concepts of IRM, Kantara Initiative focuses on the idea of a “connected life.” Developing open standards, innovations, pilots, and programs is the key to accelerating the transformation of our digital-to-human world in a way that respects users.

To power the IoT+Identity connection we’ll need:

Kantara Initiative is home of IRM where you can connect your priorities to a broader global expertise. Join Kantara now to network among leaders to shape identity today and toward the IoT enabled future. Join. Innovate. Trust.

From the desk of Joni Brennan
Executive Director, Kantara Initiative

Julian BondSome fallout from the IPCC report on climate change [Technorati links]

November 04, 2014 12:22 PM
Some fallout from the IPCC report on climate change

There's a bit of received wisdom I've been seeing stated as fact as people comment on the IPCC report. "You can have growth in global GDP without growth in the consumption of energy and resources". It's a nice fantasy and has an element of truthiness about it, because obviously increased efficiency and productivity means producing more for less. Except that rises in GDP have ALWAYS resulted in increasing consumption of energy and resources. So where's the counter example? And there's an underlying assumption that continued 3% compound growth is desirable and necessary. Is that true?

There's some interesting lines of macro-economic research here. In each case it needs to provide not just answers, ideas and proofs by example but routes to get there. And solutions need to be appropriate for global macro-economics, not just a tiny self sufficient community in the middle of Wales.

1) Can you have improving quality of life with zero or negative GDP growth?

2) Can you have increasing GDP without corresponding increases in energy and resource consumption?

3) Can we reduce our dependence on borrowing from the future via debt to fund growth in GDP?

4) Can we control the pollution side effects of growth in GDP?

It's not enough to do like Paul Krugman and just use homilies and parables about making shipping more efficient by sailing slower or such like. If it's even true, that's a local solution when answers need to be global models.

All the Limits to Growth models show hockey-stick style exponential growth leading to a brief peak followed by a catastrophic correction. More technical fixes and productivity improvements seem to lead to making the same graph more extreme; faster growth, a higher peak, a more dramatic correction. So I think this leads to the most important question.

What can we do now to create a soft landing as we transition from a growth state to a sustainable state? And that's both personally and as a global society.

If that's not hard enough. Then bear in mind what might be required to force global society to follow the optimum path when most of the actors are ill-informed and are treating the game as an iterated prisoner's dilemma where their own personal short-term gain is all that matters. And there's a lot of them spread all over the world.
[from: Google+ Posts]

Radovan Semančík - nLightWhat can we really do about the insider threat? [Technorati links]

November 04, 2014 10:46 AM

The "insider" has been indicated as the most severe security threat for decades. Almost every security study states that insiders are among the highest risks in almost any organization. Employees, contractors, support engineers - they have straightforward access to the assets, they know the environment and they are in the best position to work around any security controls that are in place. Therefore it is understandable that the insider threat is consistently placed among the highest risks.

But what has the security industry really done to mitigate this threat? Firewalls, VPNs, IDS and cryptography are of no help here. Two-factor authentication does not help either. The insiders already have the access they need, therefore securing that access is not going to help. There is not much that traditional information security can do about the insider threat. So, we have a threat that is consistently rated among the top risks – and nothing we can do about it?

The heart of the problem is in the assets that we are trying to protect. The data are stored inside applications, and typically data of all sensitivity levels are stored in the same application. Therefore network-based security techniques are almost powerless. Network security can usually control only whether a user has access to an application or not; it is almost impossible to discriminate between the individual parts of the application which the user is allowed to access – let alone individual assets. The network perimeter is long gone, so there is no longer even a place to put network security devices as the data move between cloud applications and mobile devices. This is further complicated by the defense-in-depth approach: a significant part of internal network communication is encrypted, and there is very little an Intrusion Detection System (IDS) can do because it simply does not see inside the encrypted stream. Network security is just not going to do it.

Can application security help? Enterprises usually have quite strict requirements for application security. Each application has to have proper authentication, authorization, policies, RBAC, ... you name it. If we secure the application then we also secure the assets, right? No. Not really. This approach might have worked in the early 1990s when applications were isolated. But now the applications are integrated. Approaches such as Service-Oriented Architecture (SOA) bring industrial-scale integration. The assets travel almost freely from application to application. There are even composite applications that are just automated processes living somewhere "between applications" in the integration layer. Therefore it is no longer enough to secure a couple of sensitive applications. All the applications, the application infrastructure and the integration layers need to be secured as well.

As every security officer knows, there is an aspect that is much more important than high security: consistent security. It makes no sense to have high security in one application while another application that works with the same data is left unsecured. The security policies must be applied consistently across all applications. And this cannot be done in each application individually, as that would be a daunting and error-prone task. It has to be automated. As the applications are integrated, the security needs to be integrated as well. If it is not, the security effectively disappears.

Identity Management (IDM) systems are designed to integrate security policies across applications and infrastructure. The IDM system is the only component that can see inside all the applications. It can make sure that RBAC and SoD policies are applied consistently in all the applications, and that accounts are deleted or disabled on time. Because the IDM system can correlate data across many applications, it can also check for illegal accounts (e.g. accounts without a legal owner or sponsor).
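That correlation step is easy to sketch. The following is a minimal illustration (with invented data and function names, not the API of any real IDM product) of the check described above: every account found in an application must map back to a legal owner in the authoritative HR source, and anything left over is flagged.

```python
# Active identities from the authoritative HR feed (invented sample data).
hr_records = {"alice", "bob"}

# Accounts discovered in each connected application.
app_accounts = {
    "crm": {"alice", "bob", "svc_backup"},
    "erp": {"alice", "mallory"},
}

def find_illegal_accounts(hr, apps):
    """Return, per application, the accounts with no legal owner in HR."""
    return {app: sorted(accs - hr) for app, accs in apps.items() if accs - hr}

print(find_illegal_accounts(hr_records, app_accounts))
# → {'crm': ['svc_backup'], 'erp': ['mallory']}
```

A real deployment correlates on richer attributes than a bare username, but the principle is the same: the IDM system is the only place where this cross-application comparison can happen at all.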

IDM systems are essential; it is hardly possible to implement a reasonable information security policy without one. However, IDM technology has a very bad reputation. It is considered a very expensive and never-ending project - and rightfully so. The combination of inadequate products, vendor hype and naive deployment methods contributed to a huge number of IDM project failures in the 2000s. Identity and Access Management (IAM) projects ruined many security budgets. Luckily, this first-generation IDM craze is drawing to a close. The second-generation products of the 2010s are much more practical: lighter, open and much less expensive. Iterative and lean IDM deployments are finally possible.

Identity management must be an integral part of the security program. There is no question about that. Any security program is shamefully incomplete without the IDM part. The financial reasons to exclude IDM from the security program are gone. The second generation of IDM systems finally delivers what the first generation promised.

(Reposted from
November 03, 2014

Axel NennkerX-Auto-Login at Google [Technorati links]

November 03, 2014 12:57 PM
Below you can find evidence that Google is using the X-Auto-Login header in production.
Please see my other post for context.
I am using "wget" to fetch the Gmail web page, and the HTTP response contains the X-Auto-Login header.

I think that Google should standardize this.
Currently Google is using OpenID2 here, but it would probably be easy to standardize this with OpenID Connect.

ignisvulpis@namenlos:~/mozilla-central$ wget -S --user-agent="Mozilla/5.0 (Windows NT 6.3; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/37.0.2049.0 Safari/537.36"
--2014-11-03 12:23:50--
Connecting to connected.
Proxy request sent, awaiting response...
HTTP/1.1 302 Moved Temporarily
Content-Type: text/html; charset=UTF-8
Cache-Control: no-cache, no-store, max-age=0, must-revalidate
Pragma: no-cache
Expires: Fri, 01 Jan 1990 00:00:00 GMT
Date: Mon, 03 Nov 2014 11:23:51 GMT
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
X-XSS-Protection: 1; mode=block
Server: GSE
Alternate-Protocol: 443:quic,p=0.01
Connection: close
Location: [following]
--2014-11-03 12:23:51--
Connecting to connected.
Proxy request sent, awaiting response...
HTTP/1.1 200 OK
Content-Type: text/html; charset=UTF-8
Strict-Transport-Security: max-age=10893354; includeSubDomains
Set-Cookie: GAPS=1:lAGQAL021CeF4UofSLjbzRnvJw_Eqw:256mW0v3ZoeLVjLo;Path=/;Expires=Wed, 02-Nov-2016 11:23:51 GMT;Secure;HttpOnly;Priority=HIGH
Set-Cookie: GALX=xATUIfBPIN4;Path=/;Secure
X-Frame-Options: DENY
Cache-control: no-cache, no-store
Pragma: no-cache
Expires: Mon, 01-Jan-1990 00:00:00 GMT
  Transfer-Encoding: chunked
Date: Mon, 03 Nov 2014 11:23:51 GMT
X-Content-Type-Options: nosniff
X-XSS-Protection: 1; mode=block
Server: GSE
Alternate-Protocol: 443:quic,p=0.01
Connection: close
Length: unspecified [text/html]

2014-11-03 12:23:51 (1,44 MB/s) - ‘mail’ saved [70172]
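The value of such an X-Auto-Login header is a small form-encoded string: a realm plus a percent-encoded args blob of its own parameters. Here is a rough sketch in Python of how a client could pick one apart — the sample value below is made up for illustration, not captured from Google:

```python
from urllib.parse import parse_qs

# Hypothetical X-Auto-Login value (real values follow the same
# realm=...&args=... shape, with args carrying percent-encoded parameters).
header_value = "realm=com.google&args=service%3Dmail%26continue%3Dhttps%3A%2F%2Fmail.google.com%2Fmail"

outer = {k: v[0] for k, v in parse_qs(header_value).items()}
realm = outer["realm"]          # which identity realm the site expects
args = parse_qs(outer["args"])  # args decodes into its own query string

print(realm)                    # com.google
print(args["continue"][0])      # https://mail.google.com/mail
```

Given parameters like these, a browser that already knows an account for the realm can offer to sign the user in automatically.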

November 02, 2014

Julian BondThe Copenhagen IPCC report is released today. [Technorati links]

November 02, 2014 10:50 AM
The Copenhagen IPCC report is released today.

The article contains these two conflicting comments. 

"The lowest cost route to stopping dangerous warming would be for emissions to peak by 2020 – an extremely challenging goal – and then fall to zero later this century."


"The report also makes clear that carbon emissions, mainly from burning coal, oil and gas, are currently rising to record levels, not falling." 

I'm afraid that looks to this bear of little brain like we're all doomed. Mankind will continue business as usual, with accelerating carbon emissions until either resource limits or pollution (in the form of global warming, smog or whatever) put a hard stop to it. The question is when, not if.

I've no doubt people will latch onto the uncertainties, or to phrases like this: "Tackling climate change need only trim economic growth rates by a tiny fraction, the IPCC states, and may actually improve growth by providing other benefits, such as cutting health-damaging air pollution." And they'll try to say that it's not that bad really and can be dealt with. I'm afraid though that I simply don't see how China, India, the USA and others will ever want to slow down until nature forces them to.
IPCC: rapid carbon emission cuts vital to stop 'severe' impact of climate change »
Most important assessment of global warming yet warns carbon emissions must be cut sharply and soon, but UN’s IPCC says solutions are available and affordable

[from: Google+ Posts]
November 01, 2014

Anil JohnIdentity Establishment, Management and Services [Technorati links]

November 01, 2014 08:05 PM

Delivering high value digital services to a particular individual requires knowing who that individual is with a high degree of assurance. That identity assurance in turn has dependencies on the sources used to validate the information and the techniques used to verify that the validated information belongs to the person claiming it. All too often, we focus on verification techniques while neglecting the whole chain of trust that goes into validation.

Click here to continue reading. Or, better yet, subscribe via email and get my full posts and other exclusive content delivered to your inbox. It’s fast, free, and more convenient.

The opinions expressed here are my own and do not represent my employer’s view in any way.

October 31, 2014

WAYF NewsRoyal Society of Chemistry now a WAYF service [Technorati links]

October 31, 2014 12:29 PM

Online resources from the chemistry publisher Royal Society of Chemistry (RSC) can now be accessed through WAYF. Institutions connected to WAYF and subscribing to the RSC services must write to to have their WAYF access enabled.

Kuppinger Cole11.03.2015: Identity Management Crash Course [Technorati links]

October 31, 2014 10:43 AM
In KuppingerCole

An overall view on IAM/IAG and the various subtopics - define your own "big picture" for your future IAM infrastructure.

Kuppinger Cole12.03.2015: Mastering Cyber Defense 2020. Enabling Business Transformation. [Technorati links]

October 31, 2014 10:26 AM
In KuppingerCole

Your business is changing. The IoT, mobile users, tight interaction with customers, mobility, etc.: Information Security has to enable this transformation by mitigating security risks.
October 30, 2014

Matt Flynn - NetVisionA Few Thoughts on Privacy in the Age of Social Media [Technorati links]

October 30, 2014 02:02 PM
Everyone already knows there are privacy issues related to social media and new technologies. Non-tech-oriented friends and family members often ask me questions about whether they should avoid Facebook messenger or flashlight apps. Or whether it's OK to use credit cards online in spite of recent breach headlines. The mainstream media writes articles about leaked personal photos and the Snappening. So, it's out there. We all know. We know there are bad people out there who will attempt to hack their way into our personal data. But, that's only a small part of the story.

For those who haven't quite realized it, there's no such thing as a free service. Businesses exist to generate returns on investment capital. Some have said about Social Media, "if you can't tell what the product is, it's probably you." To be fair, most of us are aware that Facebook and Twitter will monetize via advertising of some kind. And yes, it may be personalized based on what we like or retweet. But, I'm not sure we fully understand the extent to which this personal, potentially sensitive, information is being productized.

Here are a few examples of what I mean:

Advanced Profiling

I recently viewed a product marketing video targeted at communications service providers. It describes the massive adoption of mobile devices and broadband connections, suggesting that by next year there will be 7.7 billion mobile phones in use with 15 billion connections globally, and that "All of these systems produce an amazing amount of customer data" - to the tune of 40TB per day, only 3% of which is transformed into revenue. The rest isn't monetized. (Gasp!) The pitch is that by better profiling customers, telcos can improve their ability to monetize that data. The thing that struck me was the extent of the profiling.

As seen in the screen capture, the user profile presented extends beyond the telco services acquired or service usage patterns into the detailed information that flows through the system. The telco builds a very personal profile using information such as favorite sports teams, life events, contacts, location, favorite apps, etc. And we should assume that favorite sports team could easily be religious beliefs, political affiliations, or sexual interests.

IBM and Twitter

On October 29, IBM and Twitter announced a new relationship that enables enterprises to "incorporate Twitter data into their decision-making." In the announcement, Twitter describes itself as "an enormous public archive of human thought that captures the ideas, opinions and debates taking place around the world on almost any topic at any moment in time." And now all of those thoughts, ideas, and opinions are available for purchase through a partnership with IBM.

I'm not knocking Twitter or IBM. The technology behind these capabilities is fascinating and impressive. And perhaps Twitter users allow their data to be used in these ways by accepting the Terms of Use. But, it feels a lot more invasive to essentially provide any third party with a siphon into the massive data that is our Twitter accounts than it would be to, for example, insert a sponsored tweet into my feed that may be selected based on which accounts I follow or keywords I've tweeted.

Instagram Users and Facebook

I recently opened Facebook to see an updated list of People I may know. Most Facebook users are familiar with the feature. It can be an easy way to locate old friends or people who recently joined the network. But something was different. The list was heavily comprised of people who I sort of recognize but have never known personally.

I realized that Facebook was trying to connect me with many of the people behind the accounts I follow on Instagram. Many of these people don't use their real names, talk about their work, or discuss personal family matters on Instagram. They're photographers sharing photos. Essentially, they're artists sharing their art with anyone who wants to take a look. And it feels like a safe way to share.

But now I'm looking at a profile of someone I knew previously only as "Ty_Chi the landscape photographer" and I can now see that he is actually Tyson Kendrick, retail manager from Chicago, father of three girls and a boy. Facebook is telling me more than Mr. Kendrick wanted to share. And I'm looking at Richard Thompson, who's a marketing specialist for one of the brands I follow. I guess Facebook knows the real people behind brand accounts too. It started feeling pretty creepy.

What does it all mean?

Monetization of social media goes way beyond targeted advertising. Businesses are reaching deep into any available data to make connections or discover insights that produce better returns. Service providers and social media platforms may share customer details with each other or with third parties to improve their own bottom lines. And the more creative they get, the more our sense of privacy erodes.

What I've outlined here extends only slightly beyond what I think most people expect. But, we should collectively consider how far this will all go. If companies will make major financial decisions based on Twitter user activity, will there be well-funded campaigns to change user behavior on Social Media platforms? Will the free-flow exchange of ideas and opinions become more heavily and intentionally influenced?

The sharing/exchanging of users' personal data is becoming institutionalized. It's not a corner case of hackers breaking in. It's a systemic business practice that will grow, evolve, and expand.

I have no recipe to avoid what's coming. I have no suggestions for users looking to hold onto the last threads of their privacy. I just think it's worth thinking critically about how our data may be used and what that may mean for us in years to come.

Vittorio Bertocci - MicrosoftADAL .NET v3 Preview: PCL, Xamarin iOS and Android Support [Technorati links]

October 30, 2014 11:00 AM

It’s preview time again for ADAL .NET! I can’t believe we are already working on version 3 – this pace would have been unthinkable just a few years back, and yet today it is barely enough to keep up with the new possibilities that the tech landscape offers.

Versions 1 and 2 have been enjoying great adoption. While I am happy to report that overall you guys appear very happy with the library, of course we still have significant margin for improvement. This first preview does not address all the fundamental areas we want to work on (hint, hint: expect breaking changes before we get to GA) but it does tackle one of the greatest hits among the feature requests: the ability to use ADAL with PCL technology, and in particular with Xamarin.
If you are not familiar with those, here's how to wrap your head around what we are aiming at.

Say that you are working on a native application that makes use of some Microsoft API (Office 365 Exchange and/or SharePoint, Directory Graph, Azure management, etc.) and/or any API protected by AD. Say that you want versions of your application for .NET desktop, iOS, Android, Windows Store and Windows Phone 8.1.
With ADAL .NET v3 you will be able to write (in C#) a single class library containing all the authentication code and API calls. Once that's done, you simply reference that class library and the ADAL .NET v3 NuGet from all your platform-specific projects. Bam. Now you only need to code the UX for every project, as the authentication code is simply reused across all platforms.

Is your mind blown yet? Mine still is! Every time I see the debugger hit the exact same breakpoint regardless of the platform I am working with, I find myself in a marveling stupor Smile

This post will give you a quick introduction to the new approach this preview pursues. I won’t go too deep into details, mostly because those are super early bits: you should expect rougher edges than you’ve experienced in our past previews, as we really want to have an opportunity to hear from you before we start chiseling. And now, without further ado…

The New Architecture

ADAL v3 maintains the usual primitives that (at least, that’s what you tell us Smile) you already know and love: AuthenticationContext, AcquireToken, AuthenticationResult. From the code perspective, ADAL v3 preview 1 will hold very few surprises.
What changed is how the library itself is factored. At this point the library contains the following:

On each platform, the full ADAL feature set is an emergent property of the combination of ADAL.PCL and ADAL.PCL.<Platform>. Oversimplifying a bit, the experience goes as follows:

  1. Say that your project is a Xamarin classic API iOS project. You add a reference to the ADAL v3 NuGet from your project. That adds to your references both ADAL.PCL (Microsoft.IdentityModel.Clients.ActiveDirectory) and ADAL.PCL.iOS (Microsoft.IdentityModel.Clients.ActiveDirectory.Platform). The only exception is if your project is itself a PCL, in which case you’ll only get ADAL.PCL.
  2. Code normally against the usual OM, which comes from ADAL.PCL.
  3. When you run your project, at runtime ADAL executes the token request logic and, once it gets to the point in which it needs to leverage a platform specific component, it injects the logic from the platform specific assembly to perform the task requested: pop out a browser surface, dispatch a message to the app with the results, and so on.

How this is even possible is, in my opinion, a thing of beauty. We have Afshin Sepehri to thank for this neat trick, and in fact for the entire architecture!
Here's how it works. You will notice that AcquireTokenAsync accepts a new parameter, of type IAuthorizationParameters. This parameter passes to ADAL a reference to the parent component hosting the logic that requests a token. Every platform passes a different type – based on that, ADAL can load the correct assembly and inject it as a dynamic dependency. Neat-o!
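To make the shape of that trick concrete, here is a toy sketch in Python (all names are invented — this is not the ADAL API): the shared core only knows the abstract parameters type, and the concrete type each platform project passes in carries the platform-specific behavior.

```python
# The shared "core" library knows only this abstract surface.
class AuthorizationParameters:
    def show_browser(self) -> str:
        raise NotImplementedError

# Each platform-specific assembly contributes its own concrete type.
class IOSAuthorizationParameters(AuthorizationParameters):
    def show_browser(self) -> str:
        return "UIWebView sign-in"

class AndroidAuthorizationParameters(AuthorizationParameters):
    def show_browser(self) -> str:
        return "WebView sign-in"

def acquire_token(resource: str, parent: AuthorizationParameters) -> str:
    # The core never mentions iOS or Android; the parent object supplied
    # by the caller decides which browser surface pops up.
    ui = parent.show_browser()
    return f"token-for-{resource} ({ui})"

print(acquire_token("graph", IOSAuthorizationParameters()))
# → token-for-graph (UIWebView sign-in)
```

ADAL does the equivalent with assembly loading rather than simple subtyping, but the design idea is the same: the caller's parameter type is what selects the platform implementation at runtime.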

As I mentioned in the intro to the post, if you are targeting more than one platform at once you can carry this approach further: instead of calling ADAL APIs directly from the platform-specific projects, you can gather all of that in your own shared PCL that exposes only the business logic that makes sense for your app and none of the token acquisition logic. Remember, you’ll still need to reference the ADAL NuGet from every platform-specific project for the aforementioned trick to work – but this way you end up with a component that is 100% shared across all projects.

To make this more practical, let me refer to our new sample here.

The sample represents a super simple Directory Graph API client, which you can use to look up a user by email alias.

The solution follows the structure we discussed, one PCL and many platform-specific projects:


The PCL holds a reference to ADAL.PCL:


As it is a very simple scenario, the PCL exposes a single method:

// …
public static async Task<List<User>> SearchByAlias(string alias, IAuthorizationParameters parent)
{
    AuthenticationResult authResult = null;
    JObject jResult = null;
    List<User> results = new List<User>();

    AuthenticationContext authContext = new AuthenticationContext(commonAuthority);
    if (authContext.TokenCache.ReadItems().Count() > 0)
        authContext = new AuthenticationContext(authContext.TokenCache.ReadItems().First().Authority);
    authResult = await authContext.AcquireTokenAsync(graphResourceUri, clientId, returnUri, parent);
// …

The code looks in the cache to see if there is already an entry, in which case it uses that entry's tenant as the authority; otherwise it falls back to common. Apart from the parent parameter, it’s business as usual.

To see how this is used, let’s pick a random project: say the iOS one.


Here the references include our own PCL, references to both ADAL.PCL and ADAL.PCL.iOS, and the usual Xamarin stuff.

All the interesting logic is in DirSearchClient_iOSViewController:

public override void ViewDidLoad()
{
    // Perform any additional setup after loading the view, typically from a nib.
    SearchButton.TouchUpInside += async (object sender, EventArgs e) =>
    {
        if (string.IsNullOrEmpty(SearchTermText.Text))
        {
            // UX stuff cut for clarity
        }

        List<User> results = await DirectorySearcher.SearchByAlias(SearchTermText.Text, new AuthorizationParameters(this));
        if (results.Count == 0)
        {
            // ...more stuff
        }
    };
}

The interesting bit is the highlighted line. That’s the only code you need in the iOS project to take advantage of the dir searcher features – you don’t even need to know that ADAL is in there! Smile

Running the projects in the solution is a pretty amazing experience: incredible reach with so little code… and in my favorite language, C#!


In fact, there is something I find even more mind-blowing. You can actually take the solution, copy it to a Mac, open it with Xamarin Studio, and edit and debug the iOS and Android projects directly from there! See the screenshot from my Mac below. I am sure that if you have worked a lot with Xamarin you might be blasé about this, but being still new to it… this is awesome Open-mouthed smile



Did I mention this is rougher than our usual previews? This preview is meant to unblock one very specific scenario – use of interactive authentication in multi-platform solutions – so that we can gather early feedback and course-correct as appropriate. Everything outside of that scenario might fail; expect NotImplemented to pop up often if you stray from that path. Important features like the persistent cache or silent token acquisition are missing. The OM *WILL* change, hence if you write code against it now please know that the next preview will very likely break it.

We also have limitations on the target platforms: ADAL can’t target Xamarin Forms or the new project types yet, it does not work yet with ASP.NET vNext, and so on. We’ll iterate and make things better, but you should expect an interesting ride Smile

Also, very important: this does not change our existing strategy for native libraries. ADAL .NET v3 will extend the scope of what a .NET developer can do, but this does not change at all our commitment to support developers who work natively on iOS and Android with Objective-C and Java. In fact, parity is not even a goal here. All the libraries share our general approach (Open source, rapid iterations, etc) but that’s about it. If you are an Objective-C or a Java developer don’t worry, we still got you covered! Smile


I hope you’re as excited as I am about this new development in ADAL-land. A good starting point is this sample. I am planning to show this at my TechEd EU session today, hence shortly we should also have a video. The more feedback you give us, the faster we’ll make progress… I can’t wait to read your comments Smile

Happy coding!

Kuppinger Cole25.11.2014: From Privacy Impact Assessments (PIA) to Information Risk Assessments [Technorati links]

October 30, 2014 09:20 AM
In KuppingerCole

Privacy Impact Assessments (PIAs) are already or soon will be a legal requirement in many jurisdictions or sectors (i.e. payment cards sector). They provide a great help for institutions to focus on privacy and data flows and therefore provide an interesting entry point into an overall discussion on Information and identity-related risks. In this webinar, KuppingerCole´s fellow analysts Scott David and Karsten Kinast will discuss with you about PIAs as a natural starting point for a broader...