April 23, 2014

Kevin Marks: Fragmentions - linking to any text [Technorati links]

April 23, 2014 06:38 PM

A couple of weeks ago, I went to a W3C workshop about annotations on the web. It was an interesting day, hearing from academics, implementers, archivists and publishers about the ways they want to annotate things on the web, in the world, and in libraries. The more I listened, the more I realised that this was what the web is about. Each page that links to another one is an annotation on it.

Tim Berners-Lee's invention of the URL was a brilliant generalisation that means we can refer to anything, anywhere. But it has had a few problems over time. The original "Cool URLs don't change" has given way to Tim's "eventually every URL ends up as a porn site".

Instead of using URLs, we can search: Google's huge success means that searching for text can be more robust than linking. If I want to point you to Tom Stoppard's quote from The Real Thing:

I don’t think writers are sacred, but words are. They deserve respect. If you get the right ones in the right order, you can nudge the world a little or make a poem which children will speak for you when you’re dead.

the search link is more resilient than linking to Mark Pilgrim's deleted post about it, which I linked to in 2011.

Another problem is that linking in HTML is defined to address pages as a whole, or fragments within them, but only if the fragments are marked up as an id on an element. I can link to a blog post within a page by using the link:


because the page contains markup:

<div class="post-body entry-content" id="post-body-90336631" >

But to do that I had to go and inspect the HTML and find the id, and make a link specially, by hand.

What if instead we combined these two ideas: link to the page with a URL, and find the place within it by searching for quoted words rather than a hand-made id?

I've named these "fragmentions".

To tell these apart from an id link, I suggest using a double hash - ## for the fragment, and then words that identify the text. For example:


means "go to that page and find the words 'annotate the web' and scroll to show them"

If you click the link, you'll see that it works. That's because when I mentioned this idea in the indiewebcamp IRC channel, Jonathan Neal wrote a script to implement this, and I added it to my blog and to kevinmarks.com. You can add it to your site too.
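In outline, such a script needs nothing more than the page's text nodes. Here is a minimal sketch of the idea; the function names are mine, not those of the actual implementation:

```javascript
// Sketch of fragmention resolution (illustrative names, not the real script).

// Extract the search words from a ##-style fragment. For a link ending in
// "##annotate+the+web", window.location.hash is "##annotate+the+web";
// '+' separates the words.
function fragmentionText(hash) {
  if (!hash.startsWith('##')) return null; // a single # is an ordinary id link
  return decodeURIComponent(hash.slice(2).replace(/\+/g, ' '));
}

// Browser-only part: scroll to the first text node containing the words.
function scrollToFragmention() {
  const words = fragmentionText(window.location.hash);
  if (!words) return;
  const walker = document.createTreeWalker(document.body, NodeFilter.SHOW_TEXT);
  for (let node = walker.nextNode(); node; node = walker.nextNode()) {
    if (node.textContent.includes(words)) {
      node.parentElement.scrollIntoView();
      return;
    }
  }
}
```

The real script and extension presumably do more (matching across element boundaries, highlighting), but the core is just a text search followed by scrollIntoView.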

However, we can't get every site to add this script. So, Jonathan also made a Chrome Extension so that these links will work on any site if you're running Chrome. (They degrade safely to linking to the page on other browsers).

So, try it out. Contribute to the discussion on the Indiewebcamp Fragmentions page, or annotate this page by linking to it with a fragmention from your own blog or website.

Maybe we can persuade browser writers that fragmentions should be included everywhere.

Originally posted on kevinmarks.com

Kevin Marks: Fragmentions for Poets [Technorati links]

April 23, 2014 06:37 PM

Since the original fragmention implementation and discussion, we've been trying it out in various contexts, including any WordPress blog, Shakespeare's complete works and even the indiewebcamp wiki. There have also been a lot of reactions and suggestions. Some of these come up often enough that I thought I'd write some responses down.

What if the linked-to text changes?

Several people pointed to the New York Times Emphasis project, which builds IDs from initial letters of sentences in a paragraph to provide some degree of resilience against the linked-to text being changed. It also tries using edit distances if it can't find the text.

Whether you still want to link to changed text is a tricky problem - if the text has been removed completely, then the annotation or the point of linking may have gone (pointing out a typo, or a misstatement). Even a small change (adding the word 'not', say) can change the point of linking, so my first thought is that it can be reasonable for a change to the text to break the link.

If you want some fuzzy matching to go on, having more of the linked-to text in the fragment can only help the linked-to page identify where in the text was intended. Indeed, if enough is included, you could show the difference between what was linked to and what is there now.

What if the linked-to text occurs more than once?

By default, go to the first instance. If you want a different link, use more words to create a unique reference. While there have been proposals to link to the nth occurrence using more complex syntax, I don't think this is a natural choice, and it is likely to be more fragile. The NYT Emphasis tool mentioned above switched from an nth-sentence model to a content-dependent one for this reason; fragmentions simplify and extend this idea.

I have tried to come up with a use case that fits this goal - the closest I can think of is referring to a particular repetition of a line of poetry, for example in a villanelle.

I can link to the 3rd line of the 1st verse by citing it in full. If I wanted the final line, linking to it and the line before would work. I think this is clear, though linking to night & Rage would be enough.

The only reason I can think of to link to specific lines would be to discuss them in the context of surrounding lines, so I think this works adequately.

If a tool is made to let readers construct a link to a specific phrase, it may be worth indicating when that phrase is not unique, to encourage them to choose a longer one.

Could you combine an id and a fragmention?

A link of form #id##some+words has been suggested, but again I'm not sure I see the utility. This is the nth occurrence idea in a different guise. It combines two addressing models in one, making it harder to construct and more fragile to resolve.

The other thought, based on closer reading of the HTML5 spec ID attribute:

The id attribute specifies its element's unique identifier (ID). [DOM]

The value must be unique amongst all the IDs in the element's home subtree and must contain at least one character. The value must not contain any space characters.

There are no other restrictions on what form an ID can take; in particular, IDs can consist of just digits, start with a digit, start with an underscore, consist of just punctuation, etc.

is that we may not need the ## (which technically makes an invalid URL) at all.

If an HTML5 id cannot contain a space, then a fragment that contains one, like #two+words, can never match an id (as id="two words" would be invalid). If it can't be an id, it should be treated as a fragmention.

If you really want a one-word fragmention, a trailing space like #word+ could be used.

This means that the fragmention idea could be simplified: if a fragment contains a space, it MUST be a fragmention and should be searched for in the text. If it doesn't match any id in the page, it COULD be a fragmention and should be searched for in the text anyway.

Fragmentions become a fallback to be used when an id can't be found.
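Put as code, this fallback rule might look like the following sketch (the function name and return shape are mine, not from any specification; `fragment` is the raw fragment without its leading '#'):

```javascript
// Sketch of the simplified rule: a fragment containing a space MUST be a
// fragmention (it can never be a valid HTML5 id); otherwise try it as an
// id first, and fall back to a text search if no id matches.
function resolveFragment(fragment, idsInPage) {
  const decoded = decodeURIComponent(fragment.replace(/\+/g, ' '));
  if (decoded.includes(' ')) {
    // Trailing space handles the one-word case, e.g. "word+".
    return { kind: 'fragmention', text: decoded.trim() };
  }
  if (idsInPage.includes(fragment)) {
    return { kind: 'id', id: fragment }; // ordinary fragment link
  }
  // No matching id: treat it as a one-word fragmention anyway.
  return { kind: 'fragmention', text: decoded };
}
```

Note how `#word+` resolves as a fragmention even when an element with `id="word"` exists, because the decoded fragment contains a space.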

Do not go gentle into that good night

Do not go gentle into that good night,
Old age should burn and rave at close of day;
Rage, rage against the dying of the light.

Though wise men at their end know dark is right,
Because their words had forked no lightning they
Do not go gentle into that good night.

Good men, the last wave by, crying how bright
Their frail deeds might have danced in a green bay,
Rage, rage against the dying of the light.

Wild men who caught and sang the sun in flight,
And learn, too late, they grieved it on its way,
Do not go gentle into that good night.

Grave men, near death, who see with blinding sight
Blind eyes could blaze like meteors and be gay,
Rage, rage against the dying of the light.

And you, my father, there on the sad height,
Curse, bless, me now with your fierce tears, I pray.
Do not go gentle into that good night.
Rage, rage against the dying of the light.

by Dylan Thomas

hear him read it

Originally on my own website

Katasoft: Build a Killer Node.js Client for Your REST+JSON API [Technorati links]

April 23, 2014 06:05 PM

If you’re building a Node API client, the design of your client can make or break adoption.

Our CTO Les Hazlewood recently built the Stormpath Node SDK and put what he learned into this presentation on Building A Node.js Client for REST+JSON APIs. We get a ton of questions about how we design SDKs at Stormpath, so we put the video on YouTube.

In the video, Les covers:

If you want to explore our Node client further, here’s the getting started code from the presentation:

$ git clone https://github.com/stormpath/stormpath-sdk-node.git
$ cd stormpath-sdk-node
$ npm install
$ grunt

We’ll also be posting full write-up blogs soon – stay tuned!

Thanks to the BayNode Meetup for hosting this talk.

April 22, 2014

Kuppinger Cole: Data retention directive in Europe considered illegal by EU court [Technorati links]

April 22, 2014 10:16 PM
By Sachar Paulus

Have you seen this WSJ article?

This is great news for privacy, human rights and public security grounded in individual freedom: nations can no longer require IT and telecom companies to store communication data about all customers and communication partners – at the very least, there must be clear indications of a need to store that data, and clearly defined, very restrictive rules on doing so.

For some time now, security organizations have claimed that they can only cope with the new risks of internet and information technology by having more or less unlimited access to user data. The primary idea is that keeping this data in the first place makes it easier to obtain evidence about communications and their metadata. But it may also be used to create profiles and thus to prejudge innocent people. And recent history has shown that this is not only possible, but that security agencies actually act on it proactively.

The reasoning is wrong in the first place, anyway. Access to profiling information neither supports better prevention of crime nor helps to solve it. Crime will always exist, and those committing crimes will always try to use means by which the risk of being tracked is as low as possible. Consequently, security organizations will only be successful if they do not disclose their tracking means and technologies – but exactly this carries the very same risk of creating prejudice and destroying social freedom.

Many European nations now have to revisit their legal frameworks. Since Europe is by now one of the largest legal ecosystems, this will have a significant impact on individual information security and freedom – at least within Europe. It will be interesting to observe whether it also influences other regions.

This will, in turn, have some impact on companies’ IT security architectures in the long run. Companies that have started to track their employees’ digital activities for security prevention, and bet on such a practice being allowed or even supported for nationwide cybersecurity, need to rethink this approach. Many solution providers have emerged in recent years offering the kind of profiling information used by security agencies; these will need either to withdraw from Europe or to add privacy-friendly products to their portfolios.

Note that this is not the end of profiling end users (and security organizations especially should listen carefully): most consumers actively offer more than enough data to track and trace them across the internet – one only needs to look at that data, e.g. with Google and other ad companies. This area is not in the scope of the EU court decision. And just as with communication metadata: you will, with high probability, not find the REAL bad guys there…


Bill Nelson - Easy Identity: Understanding OpenAM and OpenDJ Account Lockout Behaviors [Technorati links]

April 22, 2014 07:36 PM

The OpenAM Authentication Service can be configured to lock a user’s account after a defined number of login attempts have failed.  Account Lockout is disabled by default, but when configured properly, this feature can be useful in fending off brute force attacks against OpenAM login screens.

If your OpenAM environment includes an LDAP server (such as OpenDJ) as an authentication database, then you have options on how (and where) to configure Account Lockout settings.  This can be performed either in OpenAM (as mentioned above) or in the LDAP server itself, but the behavior differs based on where it is configured.  There are benefits and drawbacks to configuring Account Lockout in either product, and knowing the difference is essential.

Note:  Configuring Account Lockout simultaneously in both products can lead to confusing results and should be avoided unless you have a firm understanding of how each product works.  See the scenario at the end of this article for a deeper dive on Account Lockout from an attribute perspective. 

The OpenAM Approach

You can configure Account Lockout in OpenAM either globally or for a particular realm.  To access the Account Lockout settings for the global configuration,

  1. Log in to OpenAM Console
  2. Navigate to:  Configuration > Authentication > Core
  3. Scroll down to Account Lockout section

To access Account Lockout settings for a particular realm,

  1. Log in to OpenAM Console
  2. Navigate to:  Access Control > realm > Authentication > All Core Settings
  3. Scroll down to Account Lockout section

In either location you will see various parameters for controlling Account Lockout as follows:


OpenAM Account Lockout Parameters

Configuring Account Lockout in OpenAM


Account Lockout is disabled by default; you need to select the “Login Failure Lockout Mode” checkbox to enable this feature.  Once it is enabled, you configure the number of attempts allowed before an account is locked, and whether a warning message is displayed to the user before the account is locked.  You can configure how long the account stays locked, and the duration between successive lockouts (which can increase if you set the duration multiplier).  You can also configure additional attributes for storing the account lockout information, beyond the default attributes configured in the Data Store.

Enabling Account Lockout affects the following Data Store attributes:  inetUserStatus and sunAMAuthInvalidAttemptsData.  By default, the value of the inetUserStatus attribute is either Active or Inactive, but this can be configured to use another attribute and another attribute value.  This can be configured in the User Configuration section of the Data Store configuration as follows:



Data Store Account Lockout Attributes


These attributes are updated in the Data Store configuration for the realm.  A benefit of implementing Account Lockout in OpenAM is that you can use any LDAPv3 directory, Active Directory, or even a relational database – but you do need to have a Data Store configured to give OpenAM somewhere to write these values.  An additional benefit is that OpenAM comes preconfigured with error messages that can easily be displayed when a user’s account is about to be locked or has become locked.  Configuring Account Lockout within OpenAM, however, may not provide the level of granularity that you need, and as such you may need to configure it in the authentication database (such as OpenDJ) instead.

The OpenDJ Approach

OpenDJ can be configured to lock accounts as well.  This is defined in a password policy and can be configured globally (the entire OpenDJ instance) or it may be applied to a subentry (a group of users or a specific user).  Similar to OpenAM, a user’s account can be locked after a number of invalid authentication attempts have been made.  And similar to OpenAM, you have several additional settings that can be configured to control the lockout period, whether warnings should be sent, and even who to notify when the account has been locked.

But while configuring Account Lockout in OpenAM recognizes invalid password attempts only within your SSO environment, configuring it in OpenDJ recognizes invalid attempts from any application that uses OpenDJ as an authentication database.  This is a more centralized approach and can recognize attacks from several vectors.

Configuring Account Lockout in OpenDJ affects the following OpenDJ attributes:  pwdFailureTime (a multivalued attribute consisting of the timestamp of each invalid password attempt) and pwdAccountLockedTime (a timestamp indicating when the account was locked).

Another benefit of implementing Account Lockout in OpenDJ is the ability to configure Account Lockout for different types of users.  This is helpful when you want to have different password policies for users, administrators, or even service accounts.  This is accomplished by assigning different password polices directly to those users or indirectly through groups or virtual attributes.  A drawback to this approach, however, is that OpenAM doesn’t necessarily recognize the circumstances behind error messages returned from OpenDJ when a user is unable to log in.  A scrambled password in OpenDJ, for instance, simply displays as an Authentication failed error message in the OpenAM login screen.

By default, all users in OpenDJ are automatically assigned a generic (rather lenient) password policy that is aptly named:  Default Password Policy.  The definition of this policy can be seen as follows:


dn: cn=Default Password Policy,cn=Password Policies,cn=config
objectClass: ds-cfg-password-policy
objectClass: top
objectClass: ds-cfg-authentication-policy
ds-cfg-skip-validation-for-administrators: false
ds-cfg-force-change-on-add: false
ds-cfg-state-update-failure-policy: reactive
ds-cfg-password-history-count: 0
ds-cfg-password-history-duration: 0 seconds
ds-cfg-allow-multiple-password-values: false
ds-cfg-lockout-failure-expiration-interval: 0 seconds
ds-cfg-lockout-failure-count: 0
ds-cfg-max-password-reset-age: 0 seconds
ds-cfg-max-password-age: 0 seconds
ds-cfg-idle-lockout-interval: 0 seconds
ds-cfg-java-class: org.opends.server.core.PasswordPolicyFactory
ds-cfg-lockout-duration: 0 seconds
ds-cfg-grace-login-count: 0
ds-cfg-force-change-on-reset: false
ds-cfg-default-password-storage-scheme: cn=Salted SHA-1,cn=Password Storage 
ds-cfg-allow-user-password-changes: true
ds-cfg-allow-pre-encoded-passwords: false
ds-cfg-require-secure-password-changes: false
cn: Default Password Policy
ds-cfg-require-secure-authentication: false
ds-cfg-expire-passwords-without-warning: false
ds-cfg-password-change-requires-current-password: false
ds-cfg-password-generator: cn=Random Password Generator,cn=Password Generators,
ds-cfg-password-expiration-warning-interval: 5 days
ds-cfg-allow-expired-password-changes: false
ds-cfg-password-attribute: userPassword
ds-cfg-min-password-age: 0 seconds


The value of the ds-cfg-lockout-failure-count attribute is 0, which means that user accounts are not locked by default – no matter how many incorrect attempts are made.  This is one of the many security settings that you can configure in a password policy and, while many of them mimic what is available in OpenAM, others go quite a bit deeper.

You can use the OpenDJ dsconfig command to change the Default Password Policy as follows:


dsconfig set-password-policy-prop --policy-name "Default Password Policy" \
  --set lockout-failure-count:3 --hostname localhost --port 4444 --trustAll \
  --bindDN "cn=Directory Manager" --bindPassword ****** --no-prompt


Rather than modifying the Default Password Policy, a preferred method is to create a new password policy and apply your own specific settings to the new policy.  This policy can then be applied to a specific set of users.

The syntax for using the OpenDJ dsconfig command to create a new password policy can be seen below.


dsconfig create-password-policy --set default-password-storage-scheme:"Salted SHA-1" \
  --set password-attribute:userpassword --set lockout-failure-count:3 \
  --type password-policy --policy-name "Example Corp User Password Policy" \
  --hostname localhost --port 4444 --trustAll --bindDN "cn=Directory Manager" \
  --bindPassword ****** --no-prompt


Note:  This example contains a minimum number of settings (default-password-storage-scheme, password-attribute, and lockout-failure-count).  Consider adding additional settings to customize your password policy as desired.

You can now assign the password policy to an individual user by adding the following attribute as a subentry to the user’s object:


ds-pwp-password-policy-dn: cn=Example Corp User Password Policy,cn=Password Policies,cn=config


This can be performed using any LDAP client where you have write permissions to a user’s entry.  The following example uses the ldapmodify command in an interactive mode to perform this operation:


$ ldapmodify -D "cn=Directory Manager" -w ****** <ENTER>
dn: uid=bnelson,ou=People,dc=example,dc=com <ENTER>
changetype: modify <ENTER>
replace: ds-pwp-password-policy-dn <ENTER>
ds-pwp-password-policy-dn: cn=Example Corp User Password Policy,
  cn=Password Policies,cn=config <ENTER>


Another method of setting this password policy is through the use of a dynamically created virtual attribute (i.e. one that is not persisted in the OpenDJ database backend).  The following definition automatically assigns this new password policy to all users that exist beneath the ou=people container (the scope of the virtual attribute).


dn: cn=Example Corp User Password Policy Assignment,cn=Virtual 
objectClass: ds-cfg-virtual-attribute
objectClass: ds-cfg-user-defined-virtual-attribute
objectClass: top
ds-cfg-base-dn: ou=people,dc=example,dc=com
cn: Example Corp User Password Policy Assignment
ds-cfg-attribute-type: ds-pwp-password-policy-dn
ds-cfg-enabled: true
ds-cfg-filter: (objectclass=sbacperson)
ds-cfg-value: cn=Example Corp User Password Policy,cn=Password 


Note:  You can also use filters to gain very granular control over how password policies are applied.

Configuring Account Lockout in OpenDJ offers more flexibility and, as such, may be considered more powerful than OpenAM in this area.  The potential confusion, however, comes when attempting to unlock a user’s account after it has been locked in both OpenAM and OpenDJ.  This is described in the following example.

A Deeper Dive into Account Lockout

Consider an environment where OpenAM is configured with the LDAP authentication module and that module has been configured to use an OpenDJ instance as the authentication database.



OpenDJ Configured as AuthN Database


OpenAM and OpenDJ have both been configured to lock a user’s account after 3 invalid password attempts.  What kind of behavior can you expect?  Let’s walk through each step of an Account Lockout process and observe the behavior on Account Lockout specific attributes.


Step 1:  Query Account Lockout Specific Attributes for the Test User


$ ldapsearch -D "cn=Directory Manager" -w ****** uid=testuser1 inetuserstatus \
  sunAMAuthInvalidAttemptsData pwdFailureTime pwdAccountLockedTime

dn: uid=testuser1,ou=test,dc=example,dc=com
inetuserstatus: Active


The user is currently active and Account Lockout specific attributes are empty.


Step 2:  Open the OpenAM Console and access the login screen for the realm where Account Lockout has been configured.



OpenAM Login Page


Step 3:  Enter an invalid password for this user



OpenAM Authentication Failure Message


Step 4:  Query Account Lockout Specific Attributes for the Test User


$ ldapsearch -D "cn=Directory Manager" -w ****** uid=testuser1 inetuserstatus \
  sunAMAuthInvalidAttemptsData pwdFailureTime pwdAccountLockedTime

dn: uid=testuser1,ou=test,dc=example,dc=com
sunAMAuthInvalidAttemptsData:: PEludmFsaWRQYXNzd29yZD48SW52YWxpZENvdW50PjE8L0
inetuserstatus: Active
pwdFailureTime: 20140422125819.918Z


You now see that there is a value for the pwdFailureTime.  This is the timestamp of when the first password failure occurred.  This attribute was populated by OpenDJ.

The sunAMAuthInvalidAttemptsData attribute is populated by OpenAM.  This is a base64 encoded value that contains valuable information regarding the invalid password attempt.  Run this through a base64 decoder and you will see that this attribute contains the following information:
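The decoding step itself is mechanical; in Node, for example, using the (truncated) attribute value from the search output above:

```javascript
// Decode the sunAMAuthInvalidAttemptsData value from the ldapsearch output.
// The listing truncates the base64 string, so the decoded XML is cut off
// too, but the invalid-attempt count OpenAM tracks is already visible.
const encoded = 'PEludmFsaWRQYXNzd29yZD48SW52YWxpZENvdW50PjE8L0';
const decoded = Buffer.from(encoded, 'base64').toString('utf8');
console.log(decoded); // <InvalidPassword><InvalidCount>1</  (truncated)
```

OpenAM stores a small XML document in this attribute; the later search outputs show the count stepping to 2 and then resetting to 0 when the account is locked.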




Step 5:  Repeat Steps 2 and 3.  (This is the second password failure.)


Step 6:  Query Account Lockout Specific Attributes for the Test User


$ ldapsearch -D "cn=Directory Manager" -w ****** uid=testuser1 inetuserstatus \
  sunAMAuthInvalidAttemptsData pwdFailureTime pwdAccountLockedTime

dn: uid=testuser1,ou=test,dc=example,dc=com
sunAMAuthInvalidAttemptsData:: PEludmFsaWRQYXNzd29yZD48SW52YWxpZENvdW50PjI8L0
inetuserstatus: Active
pwdFailureTime: 20140422125819.918Z
pwdFailureTime: 20140422125913.151Z


There are now two values for the pwdFailureTime attribute – one for each password failure.  The sunAMAuthInvalidAttemptsData attribute has been updated as follows:




Step 7:  Repeat Steps 2 and 3.  (This is the third and final password failure.)



OpenAM Inactive User Page


OpenAM displays an error message indicating that the user’s account is not active.  This is OpenAM’s way of acknowledging that the user’s account has been locked.


Step 8:  Query Account Lockout Specific Attributes for the Test User


$ ldapsearch -D "cn=Directory Manager" -w ****** uid=testuser1 inetuserstatus \ 
  sunAMAuthInvalidAttemptsData pwdFailureTime pwdAccountLockedTime

dn: uid=testuser1,ou=test,dc=example,dc=com
sunAMAuthInvalidAttemptsData:: PEludmFsaWRQYXNzd29yZD48SW52YWxpZENvdW50PjA8L0
inetuserstatus: Inactive
pwdFailureTime: 20140422125819.918Z
pwdFailureTime: 20140422125913.151Z
pwdFailureTime: 20140422125944.771Z
pwdAccountLockedTime: 20140422125944.771Z


There are now three values for the pwdFailureTime attribute – one for each password failure.  The sunAMAuthInvalidAttemptsData attribute has been updated as follows:




You will note that the counters have all been reset to zero.  That is because the user’s account has been inactivated by OpenAM by setting the value of the inetuserstatus attribute to Inactive.  Additionally, the third invalid password caused OpenDJ to lock the account by setting the value of the pwdAccountLockedTime attribute to the value of the last password failure.

Now that the account is locked out, how do you unlock it?  The natural thing for an OpenAM administrator to do is to reset the value of the inetuserstatus attribute and they would most likely use the OpenAM Console to do this as follows:



OpenAM Edit User Page (Change User Status)


The problem with this approach is that while the user’s status in OpenAM is now made active, the status in OpenDJ remains locked.


$ ldapsearch -D "cn=Directory Manager" -w ****** uid=testuser1 inetuserstatus \
  sunAMAuthInvalidAttemptsData pwdFailureTime pwdAccountLockedTime

dn: uid=testuser1,ou=test,dc=example,dc=com
sunAMAuthInvalidAttemptsData:: PEludmFsaWRQYXNzd29yZD48SW52YWxpZENvdW50PjA8L0
inetuserstatus: Active
pwdFailureTime: 20140422125819.918Z
pwdFailureTime: 20140422125913.151Z
pwdFailureTime: 20140422125944.771Z
pwdAccountLockedTime: 20140422125944.771Z


Attempting to log in to OpenAM with this user’s account yields an authentication error that would make most OpenAM administrators scratch their heads, especially after having just reset the user’s status.



OpenAM Authentication Failure Message


The trick to fixing this is to clear the pwdAccountLockedTime and pwdFailureTime attributes, and the way to do that is by modifying the user’s password.  Once again, the ldapmodify command can be used as follows:


$ ldapmodify -D "cn=Directory Manager" -w ****** <ENTER>
dn: uid=testuser1,ou=test,dc=example,dc=com <ENTER>
changetype: modify <ENTER>
replace: userPassword <ENTER>
userPassword: newpassword <ENTER>
$ ldapsearch -D "cn=Directory Manager" -w ****** uid=testuser1 inetuserstatus \
  sunAMAuthInvalidAttemptsData pwdFailureTime pwdAccountLockedTime

dn: uid=testuser1,ou=test,dc=agcocorp,dc=com
sunAMAuthInvalidAttemptsData:: PEludmFsaWRQYXNzd29yZD48SW52YWxpZENvdW50PjA8L0
inetuserstatus: Active
pwdChangedTime: 20140422172242.676Z


This, however, requires two different interfaces for managing the user’s account.  An easier method is to combine the changes into one interface.  You can modify the inetuserstatus attribute using ldapmodify, or, if you are using the OpenAM Console, simply change the password while you are updating the user’s status.



OpenAM Edit User Page (Change Password)


There are other ways to update one attribute by simply modifying the other.  This can range in complexity from a simple virtual attribute to a more complex yet powerful custom OpenDJ plugin.   But in the words of Voltaire, “With great power comes great responsibility.”

So go forth and wield your new found power; but do it in a responsible manner.

Julian Bond: Downing College-Cambridge, 1978. Special Consumers Association. Where are you now?  There was a white... [Technorati links]

April 22, 2014 05:28 PM
Downing College-Cambridge, 1978. Special Consumers Association. Where are you now?  There was a white Cortina Mk1 automatic with Jamaica flag roundels round the headlights driven by a guy who worked in the record biz. It expired just short of Harston. There was a vague connection with Regents University and very temporary digs behind Bond St station. Later there was a flat in West Hampstead. There was also a couple who lived in a flat in a big block on Steel's Rd (??? and Carrie). Some of you came to a post-wedding party in 1984 in Downshire Hill. There's a possible connection with Nigel Sharpe. I also kind of think you knew people from Queens and Cranbrook, Kent

Then there's Alex the guitar player who had a flat above Trinity St and played God Save out the window in the style of the Hendrix Star-Spangled. Later had a Guzzi Le Mans, lived in Crouch End before moving into a self build in Docklands. Alex's mate (who played Bass) and was into BASE jumping.

Mentioning Cranbrook. What happened to Ant? And the guy we always thought got recruited by the security services due to his fluent German, frequent trips to "Switzerland" and slightly dodgy dad. The others in that house on Chaucer Road. And what about Sue from Birmingham who went to Goa.

The three Dumont brothers, Max Bell and one or two others from Trinity. Especially the guy who looked after a flat in the posh bit between Shepherd's Bush and Hammersmith and a penchant for Tootsie's burgers. There were a lot of people I kind of knew at that afternoon garden party in Trinity Gardens '78. Bring white wine and we'll empty it into a dustbin and add fruit, mixers and spirits.

Did you have a house in Manchester and came to Rivington Pike in '78 in a VW camper van with a dodgy gearbox? They wouldn't let us up the hill, so we ended up in a field. There was a guy there well known for making hot air balloons. He walked round with one on the end of a string above his head. The student house had a circular "door" hole into the sitting room and was mostly painted in purple.

Did you run a record and buttons stall in Camden Market, lived in a basement flat in Camden and briefly had a record shop in Hampstead? What about the artist who had a flat in S Villas at the back and was in the ambulance to Rivington and later to LA.

Annie, Sue, Droid, John Mac, Peter Walsh, Pete, Tim Palmer, Nick Froome, Dave Harris, Kevin Metcalfe and Steve Angel in the cutting room, and all the others hanging round Utopia studios.

Does any of this ring a bell? Get in touch.
[from: Google+ Posts]

Katasoft: Choosing your Node.js Authentication Strategy [Technorati links]

April 22, 2014 03:25 PM

Node is blowing up! I’ve been working and playing with Node since 2010 and in that time I’ve seen it go from a tiny community of people hacking on side projects to a full-fledged and legit movement of modern developers building very real, very important, and very large applications. A whole ecosystem of solutions has sprung up to help Node developers, and that ecosystem is rapidly evolving. But it’s increasingly hard to figure out which solutions are best for you because of all the noise in a Google search or in npm.

Authentication and user management in particular is a difficult and shifting landscape. And yet, when building a real application in Node, it is one of the first components you need to figure out. This guide aims to give you a complete lay of the land for user management and authentication in Node.

What is available in Node?

Node today has several different paths to build user management. In no particular order, they are: Passport.js / Everyauth, your own database plus a hashing algorithm, and user management as a service.

Passport.js / Everyauth

PassportJS and Everyauth are authentication middleware for Node that leverage the Connect middleware conventions. That means if you are using a framework like Express, Restify, or Sails, you can easily plug one of their authentication schemes (or strategies) directly into your application. Everyauth comes with its strategies embedded, whereas with Passport you can pick and choose which strategies to use. Some of the common strategies developers use with Passport are Facebook and Google, but the list includes everything from local username/password authentication, to a slew of OpenID and OAuth providers, and even a Stormpath strategy for Passport.

Even though Everyauth and Passport are built on top of the same middleware framework, they have their own sets of pros and cons. Passport is more flexible and modular, but Everyauth provides additional functionality that helps with routes and login/registration views. For a lot of Node developers Passport is also preferred because it does not use promises.

Since Passport and Everyauth are built on Connect, both will help you with session management.

Additionally, they are designed for simple, easy authentication, but fall short (by design) of broader user management needs. You are still left to design, implement, and maintain all your other user infrastructure.

For example, if you are using passport-local (the strategy to authenticate username / password against your own database), Passport does not handle user signup and account verification. You will need to work with database modules to authenticate to a database, create the account, track verification status, create a verification token, send an email, and verify the account. This means a developer will need to worry about URL safety, removing expired tokens, and other security constraints (like hashing the password correctly in a database).

Your Own Database and a Hashing Algorithm

The do-it-yourself approach doesn't rely on any middleware. You choose your own stack: a database to store users (likely PostgreSQL or MongoDB) and a hashing algorithm to generate password hashes (likely bcrypt or scrypt). Searching for bcrypt or scrypt in npm will turn up quite a few modules of varying quality, each with its own set of dependencies. Be particularly careful if you are a Windows developer – we recommend the pure JavaScript implementation of bcrypt.

After deciding on a stack, you will need to build out user management and authentication. Historically this approach has been extremely common, but is tedious, prone to error, and requires more maintenance than other approaches.

You will need to build or figure out, at minimum: signup and login, account verification, password reset, secure password storage, and the data model underneath them all.

One of the biggest challenges with rolling your own auth and user management is the maintenance. Take password hashing as an example. Done right, you select a cost factor that makes your hashing algorithm purposely slow (roughly 300-700ms) to prevent brute force attacks. However, Moore's Law is a bitch: compute price/performance doubles every year, so the right cost factor today may be considered insecure tomorrow. If you're building applications that will go into production, it's your responsibility to update your hashing strategy at least once a year.

Despite the Node community's aversion to "rolling your own" middleware, there are a few benefits to this approach. You get complete control of your infrastructure. If the tools available to you are not "good enough" for what you need to deliver to your customers, then you have the ability to invest engineering effort and time to innovate around user authentication and user management. Luckily, very few applications and developers have those kinds of requirements, so the open source tools and API services available today help them move faster and deliver more.

User Management as a Service

Over time, software has moved from on-premises, to the cloud, to distributed API services. At the same time, development teams have come to rely on open source software and API services as much as their own code. It is now possible to offload user management to a system that exposes user management functionality via REST APIs and open-source SDKs for application development. Stormpath falls into this category along with some user API startups. The Node community, in particular, has adopted this service-oriented paradigm more than any other community to date.

Typically, API-driven services cover more common functionality around user management, going well beyond just authentication. The exact feature set varies by provider.

In addition to pure features, they also offload much of the security and operations burden. They host all the infrastructure for you, scale to absorb your peak traffic, and handle the ongoing security of your user data.

Offloading to an API service generally delivers convenience and improved security, while reducing development and maintenance costs. Developers now get to focus on building unique parts of their application.

However, there are trade-offs with using API services. You are introducing a third-party dependency to your application. It needs to be highly available, fast, portable, provide security during transport, and be flexible enough to meet your user data model.

Availability and performance are extremely important because a critical system like user authentication and management cannot be offline or slow. Caching in their SDKs and being built on modern cloud infrastructure is a good start, but it is important that the service is prepared for worst-case scenarios – like whole data centers going down. And if you build on top of a service for user management, make sure you can subscribe to notifications of any outages.

Data portability is also critical – can you move user data in and out in safe (and easy) ways? While it's easy to write a script to port unencrypted data over JSON, the ease of porting passwords will heavily depend on how you have stored any pre-existing data. For example, bcrypt is very portable by design, as it follows the Modular Crypt Format (MCF): multiple systems and languages can work out how to reconstruct the hash by looking at the hash value itself. If you're building a prototype and don't use a service like Stormpath, we recommend starting with an MCF hash like bcrypt – it will be much easier to upgrade in the future.
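To see why MCF makes porting easy, here's a sketch of pulling the pieces out of a bcrypt hash. The `$`-delimited layout carries everything a receiving system needs; the sample hash below is just an example value for illustration:

```javascript
// Pull the algorithm, cost factor, and salt out of an MCF-style bcrypt
// hash. The fields are '$'-delimited, so any system can read them
// straight off the stored value -- that is what makes bcrypt portable.
function parseBcryptHash(mcf) {
  const [, scheme, cost, saltAndDigest] = mcf.split('$');
  return {
    scheme,                              // e.g. '2a' or '2b'
    cost: parseInt(cost, 10),            // work factor (2^cost rounds)
    salt: saltAndDigest.slice(0, 22),    // first 22 chars are the salt
    digest: saltAndDigest.slice(22),     // remaining 31 chars are the hash
  };
}

const parsed = parseBcryptHash(
  '$2a$10$N9qo8uLOickgx2ZMRZoMyeIjZAgcfl7p92ldGxad68LJZdL17lhWy');
console.log(parsed.scheme, parsed.cost); // 2a 10
```

Compare that with a bare unsalted SHA-256 digest in a column: nothing in the stored value tells the next system how it was produced.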

Transport security between your application and the API service matters more in this approach than in the ones above, because the additional network communication needs to be secured. For example, Stormpath supports only HTTPS and uses a custom digest auth algorithm to guard against replay and man-in-the-middle attacks. You can read more about our security here.

The data model used by authentication services can vary widely – Salesforce, for instance, looks very different from the Stormpath data model, which is based on directories and is therefore generic and flexible. It's worth digging in to make sure that the data model you have planned for your application is supported by the service. This is particularly true for multi-tenant or SaaS applications.

Additionally, documentation of APIs varies widely – you should make sure your solution outlines what you need before you dive in.

Building User Management…

…is hard. In Node there are different solutions, each with its own set of pros and cons. I hope this post helps you get the lay of the land. If you have any questions, suggestions or experiences that you want to share, feel free to leave a comment or reach out to me on Twitter @omgitstom. And if you're interested in a user management API service, check out Stormpath and our official Node SDK.

KatasoftHow to secure PII (Personally Identifiable Information) [Technorati links]

April 22, 2014 03:01 PM

How to Store Personally Identifiable Information?

What is PII

PII and Stormpath

How to Protect Your Users’ PII (Personally Identifiable Information)

This is a major concern for many developers who are building applications that rely on personal information from their users – which means nearly every developer out there. Privacy is an ever-increasing focus for users, companies, and governments/regulators. In turn, we often get questions about how Stormpath fits into the equation, since we're managing their user data. This article is meant to explain where Stormpath sits, what we do for PII, what we don't do, and what a developer is expected to do themselves.

What is Personally Identifiable Information (PII)?

According to the U.S. government, PII is any information that can be used to uniquely identify, contact or locate an individual, or can be used with other sources to uniquely identify a person.

Basically, anything that can identify individuals, including their name, email address, phone numbers, addresses, credit cards, financial information, social security numbers, and driver’s license.

Worse, close to 90% of the U.S. population can be uniquely identified using only gender, date of birth and ZIP code. So it’s not just the most obvious types of PII, like credit card numbers, that require protection.

Why does it need protecting?

Why you should care about protecting PII in your application

Ok, I'm going to appeal to you on four levels – philosophically, empathetically, financially, and legally.

  1. Because you have an implicit contract with your customers. Your customers sign up for your application because they trust you. Right or wrong, they trust you to keep their data safe and only use it in the ways they expect you to. It is your duty to keep their data safe. Wouldn't you want the same for your personal info on other apps?

    Philosophy not enough? Let's try some empathy.
  2. Because it will really suck for your customers. Getting your personal information stolen from some insecure site can be enough for someone to also steal your identity – signing up for credit cards, buying all kinds of stuff on your credit, getting driver's licenses and passports, and generally wreaking havoc that will be tied back to you. Sure, you can eventually clean up the mess, but it's terribly expensive and can take months or years of legal battles to clear your name and credit. Do you want to put your users through that?

    Still not moved? Heartless. Well, let’s try your piggybank.
  3. Because a PII breach can get expensive. Losing PII can be expensive to you. The average data breach involving user data costs a company $5.5M. That includes expenses like alerting users, handling the PR nightmare, dealing with the legal mess when those pesky users and their lawyers find out, investigating what actually happened and plugging the hole, etc. But there's a cost in that number that every software developer should lose sleep over: churn. On a data breach, you can expect up to 10% of your users to leave – immediately, forever. A 10% hit in an instant is a crippling blow to most SaaS businesses, which usually deal with 4-5% churn annually.
  4. Because for some of you, it's the f—ing law. 'nough said. For users from some states (North Dakota and California) or from some countries (most of Europe), you're on the legal hook for protecting their data and taking certain steps if their data is ever compromised. And the list of places with privacy laws that affect your software is only growing. Let me be clear: this is not about where you or your application live, it's about where your users live.

How to protect PII

"Ok, I get it. I need to secure PII." Great! Let's walk through it. As many of you know, data exists in three primary states: data in use, data at rest, and data in transit. You will need to think through security for your PII in all three states. For many, Stormpath's User API handles a lot of the security burden around PII, and it's free to get started.

Data in Use

In this state the data is being worked with in your application. Maybe it's being displayed or edited in your UI, attached to an active session, or being analyzed.


Data at Rest

Basically, the data is sitting in a database or file somewhere.

Data in Transit

It's flying across the network at the speed of light, over the public internet or a private network.

A few other points: you can't search what you can't read, so be careful what you decide to encrypt. If you don't need to search against a piece of data, think hard about whether you really need it at all and why – you may want to minimize your PII footprint. An access policy is critical: define one. System-level security is crucial, and so is network-level security.

How Stormpath protects your PII

Strong network-level security; strong system-level security; strong encryption in transit with signatures; strong authentication of APIs; encrypted backups; tight access policies to production data and systems; wide and deep monitoring of our systems; secure software development processes; and rapid response to threats, vulnerabilities, and patches.

AES-256 is your best bet, but you must configure the cipher correctly: you must use (at a minimum) a secure random Initialization Vector and specify the 'CBC' block cipher mode. Most encryption libraries default block ciphers to ECB mode, which doesn't need an IV but is not secure (I can explain why if you want the next time we hang out :). AES-256 with a secure random IV and CBC mode is a minimum.

The IV places an additional burden on you, the programmer: you must save it with the encrypted output (aka 'ciphertext'). Finally, NEVER EVER store your encryption key in the same database where you store the encrypted output – store encryption keys in an external (operating-system protected) file or in Chef or Puppet, or whatever.

For example, some pseudo-code (quick and dirty, and your programming language will vary, but hopefully it conveys the steps).


String rawPassword = getFromUser();

byte[] encryptionKey = getFromYourAppConfig(); //SUPER PRIVATE – NEVER EVER STORE this in the same DB as the encrypted data – store in a file or in Chef, or Puppet, etc.

byte[] plaintextBytes = rawPassword.getBytes("UTF-8"); //really important to get UTF-8 bytes, and not platform-specific charset bytes

Cipher cipher = new Aes256Cipher(CipherMode.CBC);

byte[] initializationVector = new byte[16]; //AES requires IVs to be 16 bytes long

secureRandomNumberGenerator.fill(initializationVector); //fill w/ random data

byte[] ciphertext = cipher.encrypt(plaintextBytes, encryptionKey, initializationVector);

byte[] concatenated = initializationVector + ciphertext; //append the ciphertext bytes to the IV bytes

String base64 = Base64.encode(concatenated); //store this value in your database


byte[] encryptionKey = getFromYourAppConfig(); //same key as above – still never stored alongside the encrypted data

String base64 = getFromDatabase();

byte[] totalData = Base64.decode(base64);

byte[] initializationVector = totalData.subArray(0, 16); //the first 16 bytes are the IV

byte[] ciphertextBytes = totalData.subArray(16, totalData.length()); //the rest is the ciphertext

Cipher cipher = new Aes256Cipher(CipherMode.CBC);

byte[] plaintextBytes = cipher.decrypt(ciphertextBytes, encryptionKey, initializationVector);

String rawPassword = new String(plaintextBytes, "UTF-8"); //decode bytes via UTF-8 and NOT the platform-specific string encoding

//use rawPassword to contact the utility company


Vittorio Bertocci - MicrosoftAuthentication Protocols, Web UX and Web API [Technorati links]

April 22, 2014 02:50 PM

The back to basics post about token validation published a few weeks ago was overwhelmingly well received – hence, ever the data-driven kind, here I am jotting down the logical next step: an overview of authentication protocols.

You've heard the names: SAML, OAuth 1 and 2, WS-Federation, Kerberos, WS-Trust, OpenID and OpenID Connect, and various others. You probably already have a good intuitive grasp of what those are and what they are for, but more often than not you don't know how they really work. Usually you don't need to, which is why you can happily skip this post and still be successful taking advantage of them in your solutions. However, as stated in the former back to basics post, sometimes it is useful to pry Maya's veil open and get a feeling for the prosaic, unglamorous inner workings of those technologies. It might rob you of some of your innocence, but that knowledge will also come in immensely handy if you find yourself having to troubleshoot some stubborn auth issue.

Also, a word of caution here. This is one of my proverbial posts written during an intercontinental flight. Nobody is paying me to write it, I am doing it mostly to pass the time, hence it might end up being more verbose and ornate than some of you might like. With all due respect, this is my personal blog, I am writing in my personal time off, and this page is being served to your browser from bandwidth I pay for out of pocket. If you don't like verbose and ornate, well… Winking smile


In a nutshell: an authentication protocol is a choreography of sorts, in which a protected resource, a requestor, and possibly an identity provider exchange messages of a well-defined format, in a well-defined sequence, for the purpose of allowing a qualified requestor to access the protected resource.

The type of resource, and the mode with which the requestor attempts to access it, are the main factors influencing which shape and sequence of messages suit each scenario.

The canonical taxonomy defines two broad protocol categories: passive protocols (meant to be used by web apps which render their UX through a browser) and active protocols (meant to address scenarios in which a resource is consumed by native apps, or apps with no UX at all).

Today's app landscape has to some degree outgrown that taxonomy, as the two categories exchanged genetic material and traits from one contaminated the other: HTML5 and JavaScript can make browsers behave like native apps, and native apps can occasionally whip out a windowed browser for a quick dip in passive flows. Despite those developments, like a Newtonian physics of sorts the canonical taxonomy remains of interest – both for its didactic potential (it's a great introduction) and for its concrete predictive power (for now, most apps are still accurately described by it). Hence, in the remainder of this post I am going to unfold its details for you – and reserve the discussion of finer points for some future posts.

Web UX and Passive Protocols

Open a browser in private mode and navigate to the Microsoft Azure portal. You'll be bounced to authenticate against AAD, either via its cloud-based credential gathering pages or via your ADFS if your tenant is federated. Upon successful authentication, you'll finally see the Azure portal take shape in your browser. What you just experienced is an example of a passive authentication flow.


Passive protocols are designed to protect applications that are consumed via web browser; web "pages", if you will. The protocol takes its "passive" modifier from the nature of its main means of interaction, the web browser.
Truly, the web browser has no "will" of its own. Think about it: you kickstarted the process by typing a web address and hitting enter, and what followed was 100% determined by what the server replied to that initial GET. The browser passively (get it? Winking smile) interpreted whatever HTTP codes came back, doing the server's bidding with (nearly, see below) no contribution of its own.

SAML-P, classic OpenID, OpenID Connect and WS-Federation are all examples of passive protocols, conceived to operate by leveraging the aforementioned browser behavior. They emerged at different historical moments to address different variations on the common "web pages app" theme, and they employ wildly different syntaxes, but if you squint (sometimes a little, sometimes a lot) they all roughly conform to the same high-level patterns.

Most passive protocols are concerned with two main tasks: validating incoming requests, and signing users in.

Note: for the sake of simplicity, today I’ll ignore session management considerations like sign out and its various degrees/stages.

Let’s consider the two tasks in reverse order.

Sign in

Put yourself in the "shoes" of the Azure portal during the flow described earlier. You are sitting there all nice and merry, and suddenly from the thick fog one hand emerges, handing you a post-it that says something to the effect of "GET /". You don't see who's handing you the note, and there's nothing else you can use to infer the identity of the requestor. You need to know, because 1) you'll serve only requests from legitimate customers and 2) the content of the HTML you'd return for "/" (services, sites, etc.) depends on the caller's identity!
You have two possible courses of action here. You could simply state that the caller is unauthorized by sending back a 401, but that would not get the user very far: the browser would render the content, presumably a scary looking error, and there would be no recovering from it.
The other possibility is what passive protocols typically do: instead of simply asserting that the caller is unauthenticated, you can proactively start the process of making the necessary authentication take place. In the Azure portal case, we know that we need users to authenticate with Azure AD; hence, we can send the caller off to Azure AD with a note detailing what we want Azure AD to do – authenticate the caller and send it back to us once that's done.

That is the first protocol artifact we encounter: the sign in request. Every protocol flavor has its own message format and transport technique, but the main common traits are the address of the identity provider endpoint the browser is sent to, an identifier of the requesting app, an indication of the operation being requested (a sign in), and a return address at which the app expects to receive the results.

Some protocols can omit the return address because the IP might already know what addresses should be used for each application it knows about, and the app involved in the request is unambiguously indicated by its identifier.
In fact, for the above to work the IP usually needs pre-existing knowledge of the app requesting the sign in. This stems from both security reasons (e.g. you don't want your users to authenticate with apps you don't know about, as this might lead to unintended disclosure of info) and practical ones (e.g. the IP needs to know what info is required for each of the apps it knows about).

Want to see a couple of examples? Here they are. The constructs in the messages follow the same color coding as the abstract concepts they implement as introduced above.

Here's a sign in message in OpenID Connect:

HTTP/1.1 302 Found
Location: https://login.windows.net/9b94b3a8-54a6-412b-b86e-3808cb997309/oauth2/authorize?client_id=f8df0782-523e-4dab-a3d8-1b381c601fa5&nonce=78fth1RPnINMRGiQY2Tjwbmnz2rBJQW5tneXiOCk5g0&

You can see it contains the endpoint of the IP, the identifier of the app (the client_id) and an indication of the type of transaction we want to perform. This specific example does not feature a return address, but that's simply a function of what I could fish out from the Fiddler traces I have on my local disk (as I write these words we are flying over Iceland).

Here's a WS-Federation sign in message.

HTTP/1.1 302 Found
Location: https://login.windows.net/9b94b3a8-54a6-412b-b86e-3808cb997309/wsFederation?wa=wsignin1.0&

Here we see all the elements.

On a side note: you have to marvel at the power of evolution Smile – here are two different protocols, born at different times from (mostly) different people, and yet the "body plan" is remarkably similar. But I digress.

What happens next – as in what HTML is served back once the browser honors the 302 and hits the specified IP endpoint with the message – is up to the IP. In the Azure AD case, there's sophisticated logic determining whether the user should enter credentials right away or be redirected to a local ADFS, whether the look & feel of the page should be generic or personalized for a specific tenant, whether the user gets away with just typing username and password or a second authentication factor is required… and this is just the behavior of one provider, at one point in time. Protocols typically do not enter into the details of what an IP should do to perform authentication. However, they do prescribe how the outcome of a successful authentication operation should be represented and communicated back to the app.

From the other back to basics post you are already familiar with the use of security tokens for representing a successful authentication. It is not the only method, but it is the one we'll focus on today.
Different protocols impose different requirements on the types of tokens they admit: SAML only works with SAML tokens, OpenID Connect only uses JWTs as id_tokens, WS-Federation works with any format as long as it can reference it, and so on.

Another big difference between protocols lies in the method used to send the tokens back to the application. We’ll take a look at that next.

Request Validation

The principal aspects of this phase regulated by each protocol's specs are what kind of tokens are sent, where in the message they are placed, which supporting info is required, and which format such a message should generally assume.
The outcome of a successful sign in operation with the IP is a message that travels back to the application, carrying some proof that the authentication took place (e.g. the token) and some supporting material that the app can use to validate it.

That can be accomplished in a number of ways: once again with a plain 302, or with some more elaborate mechanism such as a page with some JavaScript designed to auto-post to the app a form containing the token. WS-Federation does the latter; OpenID Connect offers the app a choice between numerous methods, including the autopost form. Here's an example:

HTTP/1.1 200 OK
Content-Length: 2334

<form method="POST" name="hiddenform">
  <input type="hidden" name="code" value="AwABAA[.snip.]USF2uByAA" />
  <input type="hidden" name="id_token" value="eyJ0eXA[.snip.]-odog" />
  <input type="hidden" name="state" value="OpenIdConnect.AuthenticationProperties=xDn9ksWT[.snip.]sPUxKFJL7Q" />
  <input type="hidden" name="session_state" value="1dc37d42-cc43-4d82-93d5-521feb8fb27e" />
  <p>Script is disabled. Click Submit to continue.</p>
  <input type="submit" value="Submit" />
  <script language="javascript">
    window.setTimeout('document.forms[0].submit()', 0);
  </script>
</form>

As you can see, the content is a form with the requested token and some JavaScript to POST it back to the application. The app is responsible for accepting the form, locating the token in it, and validating it as discussed here. A valid token is what the app was after when it triggered all this brouhaha, hence getting it basically calls for "mission accomplished" (but don't go home yet – read the next section first).

Bonus Track: Application Session

As far as protocols are usually concerned, delivering the token to the app concludes the transaction. However, as you might imagine, dancing that dance for every single GET your app UX requires would be jolly impractical. That's why, in order to understand how the authentication flow actually works, we need to go a bit beyond the de jure aspect of the protocols and look at the de facto of the common implementations.

Upon successful validation of an incoming token via a passive protocol, it is typical for a web app to emit a session cookie – which testifies that the validation successfully took place, without having to repeat it (along with token acquisition) at every subsequent request. In fact, such a session cookie will be included by the browser in every subsequent request to the app's domain: at that point, the app just needs to verify that the cookie is present and valid. As there is no standard describing what such cookies should look like, it is up to the app (or better, to the protocol stack the app relies on) to define what "valid" means. It is customary to protect the cookie against tampering by signing and encrypting it, to assign it an expiration instant (often derived from the expiration of the token it was derived from), and so on. As long as the cookie is present and valid, a session with the app is in place.

One implication of this is that such a cookie needs to be disposed of when a user (or some other means) triggers a sign out. Also note that this cookie is separate and distinct from whatever cookie the IP might have produced for its own domain after authenticating the user – but the two are not independent. For example: if a sign out operation cleared only the app cookie but left the IP cookie undisturbed, it would not really disconnect the user, given that the first request to the app would be redirected to the IP and the presence of the IP cookie would automatically sign the user back in without requiring any credential entry. But as I mentioned earlier, I don't want to dig into sign out today, hence I won't go into the details.

This would also be a good place to start a digression on how cookies aren't well suited to AJAX calls, and how modern scenarios like SPA apps might be better off tracking sessions by directly saving tokens and attaching them to requests, but this post is already fantastically long (now we are flying over Greenland – almost cleared it, in fact) hence we'll postpone that to another day.


Well, if you managed to read all the way to here – congratulations! Now you know more about how passive protocols work, and why they deserve their moniker. Let’s shift Sauron’s eye to their (mostly) nimbler siblings, the active protocols.

Web API and Active Protocols

Whereas a classic web app packs into a single bundle both the logic for handling the user experience and the server side processing, using the browser as a puppet for enacting its narrative, in the case of a web API (and all web services in general) the separation of concerns is more crisp. The consumer of the web API – be it a native app, the code-behind of a server side process, or whatever else – acts according to its own logic and presents its own UX when and if necessary: the act of calling the web API is triggered by its own code, rather than as a reaction to a server directive. The web API serves back a representation of the requested resource, and what the requesting app does with it (process it? visualize it? store it?) is really none of the web API's business.


This radically different consumption model influenced the way in which authentication considerations latch to it. For starters, every request to a resource is modeled as an independent event, rather than part of a sequence: no cookies here. If a requestor does not present the necessary authentication material along with a request, it will simply get a 401: there won’t be automatic redirects to the IP because that’s simply incompatible with the model. If your code is performing a GET for a resource, it expects a representation of that resource or an error: getting a 302 would make no sense, as there’s no generic HTTP/HTML processor to act on it.

A requestor is expected to have its own logic for acquiring the necessary authentication material (e.g. a token) before it can perform a successful request to a protected web API. The task of acquiring tokens and the task of using them to access resources are regulated by pairwise-related – but nonetheless different & distinct – protocols. In the WS-* world, token acquisition was regulated by WS-Trust and resource access was secured according to WS-Security. In today’s RESTful landscape, token acquisition is mostly done through one of the many OAuth2 grants while resource access is done according to OAuth2 bearer token usage spec and its semi-proprietary variants.

Today I’ll gloss over the token acquisition part (after all, you can always rely on ADAL to do the heavy lifting for you) and concentrate on securing access to resources, mostly to highlight the differences from the passive flows.

It is tempting to deconstruct active calls as a passive flow where the token acquisition took place out of band and there’s no session establishment – every call needs to pack the original token. That is a step in the right direction, but not quite correct: the further difference is that whereas in passive flows the token is sent via a dedicated message with explicit sign-in semantics, active flows are designed to augment resource requests with tokens.

In terms of syntax, I am sure you are familiar with how OAuth2 adds tokens to resource requests: the Authorization header and a query parameter are the most common token vessels in my experience. WS-Security is substantially more complicated, and also occasionally violates what I wrote earlier about the lack of sessions, hence I will conveniently avoid giving examples of it.
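As a concrete sketch of those two vessels (token and URL values are placeholders; the header form is the one RFC 6750 recommends):

```python
# The two most common ways OAuth2 attaches a bearer token to a resource
# request, per the bearer token usage spec (RFC 6750).

def bearer_header(token):
    # Preferred vessel: the Authorization request header.
    return {"Authorization": "Bearer " + token}

def bearer_query(url, token):
    # Fallback vessel: the access_token query parameter (more likely to leak
    # into logs and referrers, hence discouraged).
    separator = "&" if "?" in url else "?"
    return url + separator + "access_token=" + token

print(bearer_header("mF_9.B5f-4.1JqM")["Authorization"])
# Bearer mF_9.B5f-4.1JqM
print(bearer_query("https://api.example.com/todolists", "mF_9.B5f-4.1JqM"))
# https://api.example.com/todolists?access_token=mF_9.B5f-4.1JqM
```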

Libraries and Middleware

At this point you have a concrete idea of how passive and active protocols work, which is what I was going for with this post. Before I let the topic go, though, I’d like to leverage the fact that you have the topic fresh in your mind to highlight some interesting facts about how you go about using those protocols when developing and connect some dots for you.

First of all: in most of this post I referred to the app as the entity responsible for triggering sign-ins, verifying tokens, dropping cookies and the like. Although that is certainly a possible approach, we all know that it’s not how it goes down in practice. Protocol enforcement has to take place before your app logic has a chance to kick in, and it constitutes a nicely self-contained chunk of boilerplate logic – hence it is a perfect candidate for being externalized in the infrastructure, outside of your application. Besides: you don’t want to write security-dense code every single time you need to protect a resource, and have it intertwined with your app code.
There are multiple places where the authentication protocol logic can live: the hosting layer, the web server layer and the programming framework are all approaches in common use today.
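Whichever layer hosts it, the externalized enforcement amounts to a wrapper that runs before the app proper. Here is a hypothetical, framework-neutral sketch in Python (not the actual WIF or OWIN API):

```python
# Protocol enforcement living outside the app: a wrapper inspects every
# request before the application logic runs, so the app itself stays free
# of security-dense code.

def require_token(app, validate):
    def guarded(request):
        token = request.get("authorization")
        if not token or not validate(token):
            return {"status": 401}          # active-style challenge
        return app(request)                 # only now does app logic kick in
    return guarded

# The application itself knows nothing about authentication:
def todo_app(request):
    return {"status": 200, "body": "your todo list"}

api = require_token(todo_app, validate=lambda t: t == "Bearer good-token")
print(api({"authorization": "Bearer good-token"})["status"])  # 200
print(api({})["status"])                                      # 401
```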

Stacks for Passive Protocols

Examples of passive protocols implementations on the .NET platform are the claims support in .NET 4.5.x (the classes coming from Windows Identity Foundation) and the OWIN security components in ASP.NET. Check out the summary table below:


Both are technologies designed to live in front of your app and perform their function before your code is invoked. Both require you to specify the coordinates that – as you now know – are necessary for driving the protocol’s exchanges: the IP you want to work with, the identifier of your app in the transaction, occasionally the URL at which tokens should be returned, token validation coordinates, and so on.

The main differences between the two stacks are a function of when the two libraries were conceived. WIF came out at a time in which identity protocols were still the domain of administrators, hence the way in which you specify the coordinates the protocol needs is verbose and fully contained in the web.config (to make it possible to change them without touching the code proper). You can scan the web.config of an app configured to use WIF and find explicit entries for literally all the color-coded coordinates discussed above. Also, at the time everything ran on IIS and System.Web – hence, the technology for intercepting requests was fully based on the associated extensibility model.

The new OWIN security components for ASP.NET were conceived in a world where IPs routinely publish machine-readable documents (“metadata documents”) listing many of the coordinates required for driving protocol transactions – hence configuring an app can be greatly simplified by referring to such documents rather than listing each coordinate explicitly. As claims-based identity protocols became mainstream for developers, the web.config-centric model ceased to be necessary. Finally, today’s high-density services and, in general, a desire for improved portability and host-independence led to the use of OWIN, a processing pipeline that can be targeted without requiring any assumption about where an app is ultimately hosted.

Despite all those differences, the goals and overall structure of the two stacks remain the same: implementing a passive sign-in protocol by intercepting unauthenticated requests, issuing sign-in challenges toward the intended IP, retrieving tokens from messages and validating them – all while requiring minimally invasive configuration and minimal impact on your own app.

Stacks for Active Protocols

If you exclude the token acquisition part, as we did in this post, implementing active protocols is easier. Or I should say, the protocol part itself is easy: it boils down to locating the token in the request, determining its type for the protocols that allow more than one, and feeding it to your validation logic.

The token validation itself can be arbitrarily complex: for example, proof-of-possession tokens in formats like SAML, which require canonicalization, are not something you’d want to implement from scratch. However, those scenarios are less and less common – today the rising currency is bitco… oops, bearer tokens, which are usually dramatically easier to handle. Let’s focus on those.

The OAuth2 bearer token usage spec does not mandate any specific token format – that said, the most commonly used format for scenarios where boundaries are crossed is JWT. That also happens to be the format used in OpenID Connect and pretty much everywhere in Azure AD.
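The structural checks on a bearer JWT are simple enough to sketch with a standard library alone (shown in Python rather than .NET, and deliberately incomplete: a real validator must also verify the token’s signature against the identity provider’s published keys, which is exactly what the stacks discussed here do for you):

```python
import base64, json, time

def b64url_decode(segment):
    # JWT segments are base64url without padding; restore it before decoding.
    return base64.urlsafe_b64decode(segment + "=" * (-len(segment) % 4))

def check_bearer_jwt(token, audience, now=None):
    """Minimal structural checks on a bearer JWT: split the three segments,
    then confirm the audience and expiry claims. Signature verification
    against the IP's keys is intentionally omitted here."""
    header_b64, claims_b64, _signature_b64 = token.split(".")
    claims = json.loads(b64url_decode(claims_b64))
    if claims.get("aud") != audience:
        raise ValueError("token issued for a different resource")
    if (now if now is not None else time.time()) >= claims["exp"]:
        raise ValueError("token expired")
    return claims

# Forge an unsigned toy token just to exercise the checks:
def b64url(obj):
    return base64.urlsafe_b64encode(json.dumps(obj).encode()).rstrip(b"=").decode()

toy = ".".join([b64url({"alg": "none"}),
                b64url({"aud": "https://api.example.com", "exp": 2000000000}),
                ""])
print(check_bearer_jwt(toy, "https://api.example.com")["aud"])
# https://api.example.com
```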

Say that you want to secure a web API as described by the OAuth2 bearer token usage spec, and that you expect JWT as the token format. What are your implementation options today on the .NET stack? Mainly two:


Getting from Paris to Seattle takes quite a long time, and this post followed suit. If you had the perseverance to read all the way to here, congratulations! You now have a better idea of how passive and active protocols work, and how today’s technology implements them. This should empower you to grasp all sorts of new insights, such as why it is not straightforward to secure both a Web API and an MVC UX within the same project: when you receive an unauthenticated GET for a given resource, how can you decide whether to treat it as a passive request (“build that sign in message! Return a 302!”) or an active one (“What, no token? Then no access to this web API, you fool! 401!”)? On this specific problem, the good news is that the OWIN security components for ASP.NET make it pretty easy to handle that exact scenario: and now that you grok all this, I’ll tell you how to do it in the next post.

Kuppinger Cole: Executive View: RSA Aveksa Identity Management & Governance - 70873 [Technorati links]

April 22, 2014 06:55 AM
In KuppingerCole

Access Governance is about the governance and management of access controls in IT systems and thus about mitigating access-related risks. These risks include the stealing of information, fraud through changing information, and the subverting of IT systems, for example in banking, to facilitate illegal actions, to name just a few. The large number of prominent incidents within the last few years proves the need to address these issues – in any...

Kuppinger Cole: Executive View: Oracle Enterprise Single Sign-On Suite Plus - 71024 [Technorati links]

April 22, 2014 06:30 AM
In KuppingerCole

Enterprise Single Sign-On (E-SSO) is a well-established technology. Despite all progress in the area of Identity Federation, E-SSO is also still a relevant technology. This is also true in the light of the growing number of Cloud-SSO solutions that manage access to cloud applications, both as on-premise and cloud-based approaches but also as targeted Single Sign-On to Cloud apps. However, in most organizations there are still in use many legacy...

April 21, 2014

Ping Talk - Ping Identity: Ping open sources OpenID Connect module for Apache Web server [Technorati links]

April 21, 2014 08:41 PM
Historically, Ping Identity has been extremely active in the identity community, be it standards bodies, conferences or open source.

Back in 2002, Ping co-founder and original CTO Bryan Field-Elliott (now chief architect for On-Demand Services) grew an internal federated identity mission into an open source project. Former Ping marketing maven Eric Norlin dubbed it SourceID and released it as "free" - as in speech, not beer - and it became the go-to community for open source federated identity projects for both identity geeks and corporations.

Now, Ping is continuing that pioneering spirit with an open source project on GitHub known as mod_auth_openidc, an authentication/authorization module for the Apache 2.x HTTP server.

The project enables an Apache web server to operate as an OpenID Connect Relying Party using the OpenID Connect Basic Client or Implicit Client profile. (See https://github.com/pingidentity/mod_auth_openidc/blob/master/auth_openidc.conf for a full set of configuration options.)

But in plain English, that means it's a groundbreaking project that fills a gap in scaling federation to handle the authentication onslaught from mobile devices, APIs and the Internet of Things.

In short, there is a real possibility that all future Web SSO will be based on OpenID Connect and scale up to untold numbers of users, and Apache is sure to be a key node in guaranteeing that model works. The recently finalized OpenID Connect specification is currently supported by Google, AOL, Salesforce.com, Ping Identity and many others.

In addition, developers are emerging as an important cog in the identity infrastructure by relying on established ID services, whether behind the firewall or on the Web, instead of crafting built-in identity schemes of their own.

The Relying Party (RP) side of federation has always been the weak link in the chain; inserting that foundational necessity for identity federation should get easier with the Apache module.

The important development here is connecting OpenID Connect to the Apache platform, the Internet's most popular web server platform according to NetCraft market research. And open source is the proper way to bring tools to the Apache community.

Open source has picked up some steam in the news lately, driven by a number of vendors. Ping Identity partner Box said recently it was re-connecting with open source (http://thenextweb.com/insider/2014/04/11/box-announces-open-source-initiative-give-back-community/). Gluu is working on OpenID Connect plugins for Apache that incorporate the User Managed Access specification and focus on authorization (https://www.gluu.org/blog/uma-and-openid-connect-plugins-for-apache/). ForgeRock forged a relationship with Salesforce on the back of OSS. And NASA just open sourced some of the toys in its software vault (http://www.wired.com/2014/04/nasa-guidebook/).

And yes, Ping is eating its own 1s and 0s on this one, allowing its mod_auth_openidc project to operate as an OAuth 2.0 Resource Server to a PingFederate OAuth 2.0 Authorization Server, validating Bearer access tokens against PingFederate.

But there are no dependencies here - it is pure open source.

Just one disclaimer: this software is open sourced by Ping Identity but not commercially supported as such. See the GitHub Issues section for the project or contact project author Hans Zandbelt directly at hzandbelt at pingidentity dot com.

The project is available at https://github.com/pingidentity/mod_auth_openidc.

Gluu: FIDO Authentication – Down boy! [Technorati links]

April 21, 2014 08:27 PM


Granted, no one is saying FIDO is the silver bullet for two-factor authentication. However, I get the impression that many people (and many senior executives in particular…) are thinking or at least hoping that maybe it is.  With the goal of putting FIDO into the proper perspective, this blog will go over some of the things Gluu loves about FIDO, and some of its limitations.

One of the most exciting potentials for FIDO is the opportunity to bring down the cost of USB-based and biometric authentication. Right now, USB authentication tokens are just too expensive. With the FIDO U2F standards, more manufacturers producing more creative form factors for tokens is a win-win for consumers: fun-to-use devices at a lower price. Likewise, FIDO also promises to standardize biometric authentication through its UAF standards. Why should each biometric vendor have a proprietary interface? For example, where are the APIs for fingerprint authentication on the new Samsung Galaxy S5?

In the near term, where will you be able to use a FIDO authentication mechanism? A plugin for Chrome and Firefox is in the works, but the roadmap for Android support of FIDO is unclear. Also, the U2F standard is hardwired to the USB protocol: on a phone without a USB interface, it's useless. Currently, the FIDO website FAQ says:

While there are no current product announcements, we anticipate multiple FIDO options available on Android phones this year and development with Windows tablet and Apple products shortly thereafter. Check in with us in a few months.

Of course, USB and biometric authentication are just two ways you could identify a person. There are other cognitive, behavioral, PKI, and NFC mechanisms that also make sense in many situations. In my SXSW prezi I present over thirty different mechanisms to identify a person. And this number probably represents less than 10% of the technologies available to organizations. For many organizations, FIDO will be just one of many authentication mechanisms that are supported. In fact, using FIDO only makes sense if the person has a certain kind of device in their hands.

One aspect of website authentication that many organizations don’t understand is that identifying the person is just one step in the workflow for authentication. Even assuming a positive authentication is made, many websites consider a number of contextual clues to assist in fraud detection. Is the geo-location of the IP address in a foreign country? Are there hooks to intrusion detection, central logging, or threat information sharing services? Authentication is a critical event frequently with special business logic required by the organization. The mechanism of authentication–which may be FIDO–is just one part of that business logic.

Recently, there was a Microsoft study that surveyed many different types of authentication. What it found is that while some mechanisms were more secure, and some more usable, the main impediment to the adoption of two-factor authentication was deployment challenges. Will FIDO improve the deployability of 2FA? Not significantly. Will it make USB and biometric authentication cheaper? YES!

One of the conundrums we are facing is why FIDO is getting all the attention while OpenID Connect's delivery of a final specification has gone mostly unnoticed. At first glance, federation protocols like "Connect" don't seem to provide a solution for two-factor authentication. In fact, Connect is neutral on how the person is authenticated.

However, here at Gluu we see everything around authentication as having the greatest impact on deployability. In order to get developers of mobile applications and websites to support strong authentication, we need a lingua franca. App developers want to be able to select FIDO authentication, or phone authentication, or even password authentication, depending on the situation. Tightly binding an application to a low-level authentication protocol like FIDO would be just as bad as sending the username/password SELECT statement to the database. OpenID Connect provides a higher-level interface between the application and the domain that is authenticating the person.

In order to reduce the deployment challenge of strong authentication, we need to make it brain dead easy for developers who are writing custom applications, and for system administrators who are configuring off-the-shelf enterprise software and SaaS services. Don’t get me wrong, FIDO is great. But it is possible that the FIDO standards will primarily be used in an organization’s federation platform–not directly in its applications.

One last thought: strong authentication is only a partial success. The trick is knowing when to elevate from a less secure type of authentication, or when to "step up" authentication. To specify the authentication policies for APIs, an authorization protocol like User Managed Access is needed. If we design only for strong authentication and SSO, we will be setting the bar too low.

Julian Bond: What a curious character. Worth investigating. [Technorati links]

April 21, 2014 06:10 PM
What a curious character. Worth investigating.
 Paul Kingsnorth »
Welcome to my website. In these dusty e-stacks I store essays, books, poetry, strange maps, scrappy jottings, diaries and other yellowing papers. Enjoy rummaging.

[from: Google+ Posts]

Courion: What Drives the Need for Identity Management in Manufacturing? [Technorati links]

April 21, 2014 12:05 PM

Access Risk Management Blog | Courion

Jim Speredelozzi: With Target, Neiman Marcus and other recent breaches, ‘retail’ is the flavor of the month among identity and access management solution providers. Similarly, HIPAA and other privacy regulations keep the healthcare folks top of mind for us identity specialists, and the thousands of regulations driving the need for financial services firms to invest in access governance and provisioning solutions keep those organizations front and center for IAM providers.

But why aren’t we talking more about manufacturing? In fact, almost 20% of Courion customers are manufacturers. What is it about manufacturing that drives the need for user provisioning, access certification and password management capabilities?

It’s a vast industry – larger than healthcare, financial services and retail combined. The size and scope of manufacturing as an industry dictate that nearly every software category is used by manufacturers. I’ve read that manufacturing in the U.S. alone would be the 8th largest economy in the world. So that’s a large addressable market. But these companies are also typically older and more conservative firms that do not like to waste money on unnecessary projects, so there is more to the story.

Like every other industry, manufacturers will invest in IAM to achieve one of three goals:

Courion’s manufacturing customers are concerned with all three of these issues. From an efficiency standpoint, with thin margins and overseas competition, anything that creates efficiency is worth considering. By automating provisioning or providing self-service password management and access request and approvals, manufacturers become more efficient in three ways:

As far as compliance is concerned, Sarbanes-Oxley (SOX), or the “American Competitiveness and Corporate Accountability Act of 2002” as this regulation is officially called, has had a huge effect on manufacturing firms. It is the most commonly cited regulatory concern for manufacturing CIOs that we speak with.

SOX has a number of provisions around information security, specifically in terms of who can access financial data, and segregation of duties (SOD) policy. Many CIOs and CISOs have interpreted these provisions to mean a need for access certification, especially around systems involving financial data. More tactically, password management solutions can make it easier to enforce password reset policies that many interpret SOX to require. Many of Courion’s manufacturing customers deploy password management first because of this.

While the SEC encourages organizations to review controls based on risk, compliance in and of itself does not necessarily reduce risk. In fact, most data breaches involve organizations that are compliant with the regulatory regimes they are expected to adhere to.

For manufacturing firms, managing risk is not the top priority when it comes to IT access and identity management. However, for those with significant intellectual property concerns – managing access risk is paramount to survival. Consider the case of Nortel, now defunct, where hackers had widespread access to computer systems for over 10 years. No technology product manufacturer can compete when their unreleased product plans become available to their competition or the highest bidder. Similarly, look at American Superconductor, where source code was stolen and sold to their largest customer for pennies on the dollar.

Frankly, manufacturers should take a closer look at their risk profile as it relates to access, starting with a discussion of which assets are critical to protect, such as product designs, future plans, or manufacturing processes. They should include the expected impact of a breach exposing these assets and, from there, consider the likelihood that such a breach would occur. Likelihood and impact together paint the true risk picture.

While “impact” can only be derived from internal discussions and analysis, Courion customers leverage Access Insight, an identity and access intelligence and analytics solution, to gain a view into the likelihood that legitimate access could be used to steal a manufacturer’s “crown jewels”.

Access Insight has been used by IT security executives across many industries to justify IAM investments, as it quickly exposes the reality that no matter how “compliant” an organization may appear to be, there may be gaping holes in its risk posture that must be remediated.

Whether your firm is looking to become more efficient, ensure compliance, reduce risk or accomplish all three, it can be valuable, particularly as a manufacturer, to consider how IAM can help. As one example, Associated Materials talks in depth about accomplishing these goals in this case study:

Case Study: Associated Materials speeds up account provisioning, reduces audit prep time, accomplishes 100% password compliance, and eliminates orphan account risk with Courion.


Kuppinger Cole: Lessons learned from the Heartbleed incident [Technorati links]

April 21, 2014 10:18 AM
In Alexei Balaganski

Two weeks have passed since the day the Heartbleed bug was revealed to the world, and people are still assessing the true scale of the disaster. We’ve learned quite a lot during these two weeks:

Anyway, it will probably take more time to reveal all possible ramifications of the Heartbleed incident, but I believe that I can already express the most important lesson I have learned from it:

The Internet is unfortunately not as safe and reliable as many people, even among IT experts, tend to believe, and only a joint effort can fix it.

Sure, we’ve known about data leaks, malware attacks, phishing, etc. for years. However, there is a fundamental difference between being hacked because of ignoring security best practices and being hacked because our security tools are flawed. One thing is forgetting to lock your door before leaving, the other is locking it every time and one day discovering that the lock can be opened with a fingernail. An added insult in this particular case was that people still using outdated OpenSSL versions were not affected by the bug.

As I wrote in my previous post, the Heartbleed bug has exposed a major flaw in the claim that Open Source software is inherently more secure because anyone can inspect its source code and find vulnerabilities. This claim does not just come from hardcore OSS evangelists; for example, BSI, Germany’s Federal Office for Information Security, is known for promoting Open Source software as a solution for security problems.

Although I believe that in theory this claim is still valid, in reality nobody will do a security audit just out of curiosity. Even the project developers themselves, often outnumbered and underfunded, cannot blindly be expected to maintain a high enough security standard. It’s obvious that a major intervention is needed to improve the situation: both financial support from the corporations that use open source software in their products and stricter government regulations.

In fact, the Heartbleed incident may have acted as a catalyst: Germany’s SPD party is currently pushing for government support for Open Source security, above all in the form of formal security audits carried out by BSI and funded by the government. TrueCrypt, another widely used OSS encryption tool, is currently undergoing a public audit supported by crowdfunding. I can only hope that corporations will follow suit.

For software developers (both commercial and OSS) an important lesson should be not to rely blindly on third-party libraries, but to treat them as part of critical infrastructure, just like network channels and the electrical grid. Security analysis and patch management must become a critical part of the development strategy for every software and especially hardware vendor. And of course, no single approach to security is ever going to be reliable enough – you should always aim for layered solutions combining different approaches.

One of those approaches, unfortunately neglected by many software vendors, is hardening applications using static code analysis. Surely, antimalware tools, firewalls and other network security tools are an important part of any security strategy, but one has to understand that all of them are inherently reactive. The only truly proactive approach to application security is making applications themselves more reliable.

Perhaps unsurprisingly, code analysis tools, such as the solution from Checkmarx or the more specialized tools my colleague reviewed earlier, don’t get much hype in the media, but there have been amazing advances in this area since the years when I was personally involved in large software development projects. Perhaps they deserve a separate blog post.

April 20, 2014

Julian Bond: Proper, full-on rant about the evil that is pet cats and dogs. [Technorati links]

April 20, 2014 09:54 AM
Proper, full on rant about the evil that is pet cats and dogs.

I approve of this message. Not only do they need daily emptying which leads to the poo-fairy and cat shit in the garden, but they kill song birds, and the pet food industry causes untold havoc. Have you any idea how hard it is to get organic, free range, cat food? Even in Waitrose? Never mind the happiness of your cat, what about the happiness of the animals that were bred to feed it? Eh?
 Greenish kitty plague spreads through US websites »
A blight of mindless cat and dog stories has infected US enviro news sites. Who's to blame?

[from: Google+ Posts]

Julian Bond: Here's a thought experiment for futurists and especially the Californian Long Now enthusiasts. [Technorati links]

April 20, 2014 09:35 AM
Here's a thought experiment for futurists and especially the Californian Long Now enthusiasts.

Imagine for a moment that Spartacus had won, Roman slaves had been freed and a mercantilist middle class had grown up. The Roman Empire hadn't collapsed under the weight of decadence and the Goths. And so a Roman Empire version of the Renaissance and Age of Enlightenment had happened around AD500-600. A Newton and/or Leibniz would have appeared around AD700 (notwithstanding the problems of trying to do Principia Mathematica in Roman numerals!) The industrial revolution would have followed in AD800, and the electronic and computer revolutions would have appeared in AD970. Peak oil would have happened around AD1020.

So now in 2014, we'd be about 1000 years into a post industrial, post global-warming, post unlimited-resource world.

What would it look like?

This post was inspired by http://blog.longnow.org/02014/04/19/the-knowledge/ I'm really quite conflicted by the Long Now foundation and the idea of getting futurist darlings like Eno to produce lists of books for a "Manual for Civilisation" or rather a "Library to reboot civilisation". Somewhat in the style of A Canticle for Leibowitz, because this civilisation is doomed and it would be a shame if the last 200 years of growth were lost when it inevitably collapses. See http://blog.longnow.org/category/manual-for-civilization/ It feels like this has changed from a thought experiment to try and get people to think about what we're doing to ourselves into a business selling books and seminars, feeding vampirically on a very western insecurity.

Don't get me wrong, I love the idea of a Clock for 10,000 years ( http://longnow.org/projects/ ) or the Long Player song that doesn't repeat for 1000 years.  ( http://longplayer.org/ ) I just wonder about the way Doom (with a capital D) is becoming an industry in the same way as Self Help, Lifestyle Coaches and all the other WooWoo. There's VC, donations from unbelievably rich tech people, celebrity endorsement, TED talks and so on and on.

Remember the words of Philip Dick; The Roman Empire never ended.
 The Knowledge »
One of the early inspirations for creating the Manual for Civilization was an email I received from Lewis Dartnell in London asking me for information...

[from: Google+ Posts]

Julian Bond: "Because Darwin" is NOT the answer to everything. If the Social Darwinists weren't bad enough here's... [Technorati links]

April 20, 2014 09:04 AM
"Because Darwin" is NOT the answer to everything. If the Social Darwinists weren't bad enough here's the Cosmological Darwinists. Apparently the universe is just the black hole's method of producing more black holes.

 What's The Purpose Of The Universe? Here's One Possible Answer »
It's tempting to think of the universe as a meaningless repository for celestial objects like planets and stars. But an intriguing theory suggests there's much more to the cosmos than meets the eye — and that black holes play an integral role in what our universe is actually trying to achieve.

[from: Google+ Posts]
April 17, 2014

Ping Talk - Ping Identity: This Week in Identity: Privacy Policies Gone Wild [Technorati links]

April 17, 2014 07:44 PM
<p><span style="line-height: 1.62;">So you love Cheerios and you're not afraid to like the brand online, download coupons from the Web site, and sacrifice your legal rights for it. </span><span style="line-height: 1.62;">What?</span></p> <p>People scratch their heads over the privacy policies on social sites like Facebook and Google, but here is first evidence of how those policies could warp as virtual and physical worlds blend.</p> <p>General Mills recently introduced its new privacy policy including "legal terms" that prevent those that demonstrate affinity for the company, such as interacting with the brand online, from later suing the company if an issue arises. People that have a dispute over products are restricted to using informal negotiation via email or going through binding arbitration to seek relief.</p> <p>"Although this is the first case I've seen of a food company moving in this direction, others will follow -- why wouldn't you?" said Julia Duncan, director of federal programs and an arbitration expert at the American Association for Justice, a trade group representing plaintiff trial lawyers. "It's essentially trying to protect the company from all accountability, even when it lies, or say, an employee deliberately adds broken glass to a product."</p> <p>One legal expert said, "You can bet there will be some subpoenas for computer hard drives in the future." <a href="http://www.nytimes.com/2014/04/17/business/when-liking-a-brand-online-voids-the-right-to-sue.html?_r=0">The New York Times has the scoop.</a></p> <p><em>Update: <a href="http://www.nytimes.com/2014/04/18/business/general-mills-amends-new-legal-policies.html">General Mills has now amended its new policies</a>. 
</em></p> <p> <span style="line-height: 1.62;">For more scoops of identity-related goodness, read on.</span></p> <p><span style="line-height: 1.62;">General</span></p> <ul> <li><a href="http://nakedsecurity.sophos.com/2014/04/12/heartbleed-would-2fa-have-helped/?utm_source=feedburner&amp;utm_medium=feed&amp;utm_campaign=Feed%3A+nakedsecurity+%28Naked+Security+-+Sophos%29">Paul Ducklin: "Heartbleed" - would 2FA have helped?</a><br />Because of the global password reset pandemic, lots of Naked Security readers have asked, "Wouldn't 2FA have helped?" You know a password. You have possession of a mobile phone that receives a one-off authentication code. We're going to focus entirely on that sort of 2FA.</li> <li><a href="http://blog.cloudflare.com/the-results-of-the-cloudflare-challenge">The Results of the CloudFlare Challenge</a><br />Earlier today we announced the <a href="https://www.cloudflarechallenge.com/heartbleed">Heartbleed Challenge</a>. We set up a nginx server with a vulnerable version of OpenSSL and challenged the community to steal its private key. The world was up to the task: two people independently retrieved private keys using the Heartbleed exploit.</li> <li><a href="http://www.modernhealthcare.com/article/20140416/BLOG/304169995/1-in-5-healthcare-workers-share-passwords-survey-warns">Joseph Conn: 1 in 5 healthcare workers share passwords, survey warns</a><br />More than 1 in 5 healthcare workers share their passwords with colleagues, a security no-no, but healthcare security pros can take some solace that such risky business is no worse in their industry than some others. Workers in the legal trade, for example, share passwords about as often as in healthcare (22%), according to findings in a report based on a survey of 250 healthcare IT security professionals in the U.S. and another 250 in the U.K. 
</li> </ul> <p> <span style="line-height: 1.62;">APIs</span></p> <ul> <li><a href="http://blog.tsheets.com/2014/api/using-oauth-2-0-to-authenticate-with-rest-ful-web-apis.html">Using OAuth 2.0 to Authenticate with REST-ful Web API's</a><br />By the end of this article, if you follow along you'll have an OAuth access token that you can use to interact with an API. We're going to do all of this without writing a single line of code.</li> <li><a href="http://www.3scale.net/2014/04/the-five-axioms-of-the-api-economy-axiom-1/">Craig Burton and Steven Willmott: The Five Axioms of the API Economy, Axiom #1-- Everything and Everyone will be API-enabled</a><br />The API Economy is a phenomenon that is starting to be covered widely in technology circles and spreading well beyond, with many companies now investing in API powered business strategies. </li> </ul> <p> <span style="line-height: 1.62;">IoT</span><span style="line-height: 1.62;"> </span></p> <ul> <li><a href="http://user.wordpress.com/2013/11/24/pebble-steals-your-email-address-from-an-unsubscribed-form/">Alex Ewerlof: Pebble steals your email address from an unsubscribed form</a><br />Pebble makes smart watches -the kind of watch with a digital display that connects to your phone to show your messages and information that are shared via an application installed on the phone. Their website promises that it "can" do a lot and I have no doubt that there's at least one thing it can do great: stealing my information!</li> <li><a href="http://qz.com/156075/internet-of-things-will-replace-the-web/">Christopher Mims: How the "internet of things" will replace the web</a><br />Most of us don't recognize just how far the internet of things will go, from souped-up gadgets that track our every move to a world that predicts our actions and emotions. 
In this way, the internet of things will become more central to society than the internet as we know it today.</li> <li><a href="http://www.forrester.com/home/">Where will you be affixing your next sensor?</a><br /><i><a href="https://www.pingidentity.com/blogs/pingtalk/assets_c/2014/04/0414%20This%20week%20in%20identity%20Wearable%20graphic-383.html" onclick="window.open('https://www.pingidentity.com/blogs/pingtalk/assets_c/2014/04/0414%20This%20week%20in%20identity%20Wearable%20graphic-383.html','popup','width=620,height=673,scrollbars=no,resizable=no,toolbar=no,directories=no,location=no,menubar=no,status=no,left=0,top=0'); return false"></a><img alt="0414 This week in identity Wearable graphic.png" src="https://www.pingidentity.com/blogs/pingtalk/0414%20This%20week%20in%20identity%20Wearable%20graphic.png" width="620" height="673" class="mt-image-center" style="text-align: center; display: block; margin: 0 auto 20px;" /></i></li> </ul> <p><span style="line-height: 1.62;">Events</span></p> <ul> <li><a href="http://www.infosec.co.uk/">Info Sec UK</a><br />April 29-May 1; London<br />More than 13,000 attendees to Europe's largest free-to-attend conference. Identity management, mobile, managed services and more.</li> <li><a href="http://www.internetidentityworkshop.com/">IIW</a><br />May 6-8, Mountain View, Calif.<br />The Internet Identity Workshop, better known as IIW, is an un-conference that happens at the Computer History Museum in the heart of Silicon Valley.</li> <li><a href="http://www.gluecon.com/2014/">Glue Conference 2014</a><br />May 21-22; Broomfield, Colo.<br />Cloud, DevOps, Mobile, APIs, Big Data -- all of the converging, important trends in technology today share one thing in common: developers. 
</li> <li><a href="http://www.kuppingercole.com/book/eic2014">European Identity &amp; Cloud Conference 2014</a><b><br /></b>May 13-16, 2014; Munich, Germany<br />The place where identity management, cloud and information security thought leaders and experts get together to discuss and shape the Future of secure, privacy-aware agile, business- and innovation driven IT.</li> <li><a href="http://www.gartner.com/technology/summits/na/catalyst/">Gartner Catalyst - UK</a><br />June 17-18, London<br />A focus on mobile, cloud, and big data with separate tracks on identity-specific IT content as it relates to the three core conference themes.</li> <li><a href="http://bit.ly/1eyMQQ3">Cloud Identity Summit 2014</a><br />July 19-22, Monterey, Calif.<br />The modern identity revolution is upon us. CIS converges the brightest minds across the identity and security industry on redefining identity management in an era of cloud, virtualization and mobile devices.</li> <li><a href="http://www.gartner.com/technology/summits/na/catalyst/">Gartner Catalyst - USA</a><br />Aug. 11-14, San Diego, CA<br />A focus on mobile, cloud, and big data with separate tracks on identity-specific IT content as it relates to the three core conference themes.</li> </ul>

KatasoftSocial Login: Facebook & Google in One API Call [Technorati links]

April 17, 2014 03:01 PM

Integrating to Facebook, Google, and other social providers can be a pain. Do you want to deal with Facebook and Google tokens and their idiosyncrasies every time you build a new app? Probably not.

We at Stormpath frequently get requests to automate social login and integration for our customers so that they don’t have to build it themselves. Well, we’ve done that. Hooray! This post is about some of the design challenges and solutions we worked through while implementing this feature for our customers.

Social Login with Stormpath – Quick Description

Stormpath’s first OAuth integration makes it easy to connect to Google and Facebook in one simple API call. It’s a simple and secure way to retrieve user profiles and convert them into Stormpath accounts, so that no matter which service you’re using, you have one simple user API.

Goals and Design Challenges

My primary goal was to make this easy for developers. At the same time, it needed to be robust enough so that Stormpath could use it as the foundation to integrate with future identity providers like Twitter and GitHub.

This came with some design challenges:

To solve one of these challenges, Stormpath allows you to create a new password for a social account in Stormpath. If they want, users can specify a password upon first registering and/or by initiating a password reset flow. Ultimately, the user can choose how they want to log in regardless of how they registered.

Making Google Access Tokens Painless

Getting an access token for a Google user in a web server application is not as easy as one might hope. Once the end-user has authorized your application, Google will send an “authorization code” as a query parameter to the “redirect_uri” you specified in the developers console when you created your “Google project”. Finally, you’ll have to exchange the “authorization code” for an access token.

Of course, each of these calls requires its own set of headers, parameters, etc. Fun times.

We wanted to reduce this burden for developers, so our Google integration conveniently automates the “exchange code” flow for you. This allows you to POST the authorization code and then receive a new (or updated) account, along with the access token, which can be used for further API calls.


At Stormpath one of our main responsibilities is securing user data. When it comes to social integration, we ensure that Facebook and Google client secrets are encrypted using strong AES 256 (CBC) encryption, using secure-random Initialization Vectors. Every encryption key is tenant-specific, so you can guarantee that your encrypted secrets are only accessible by you.

Also, Facebook Login Security recommends that every server-to-server call be signed to reduce the risk of misuse of a user access token in the event it’s stolen. If your access token is stolen and you don’t require all your server calls to be signed, the thief can use your application to send spam or read users’ private data.

Securing Facebook requests makes your application less vulnerable to attacks, which is why we recommend enabling the Require proof on all calls setting for your Facebook application. Stormpath does this by default.

How does this work? Signing a call to Facebook just means adding the “appsecret_proof” parameter to every server request you make.

The value of the additional parameter is the HMAC (SHA-256) of the user’s access token, keyed with the Facebook application secret. Finally, the generated bytes are encoded as hexadecimal characters.

appsecret_proof_value = Hex.encodeToString(hmac(SHA-256, access_token, app_secret))
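The same computation can be sketched in Python with the standard library (the token and secret values below are purely illustrative): compute an HMAC-SHA-256 over the access token with the app secret as the key, then hex-encode the result.

```python
import hashlib
import hmac

def appsecret_proof(access_token: str, app_secret: str) -> str:
    """Hex-encoded HMAC-SHA-256 of the access token, keyed by the app secret."""
    return hmac.new(
        app_secret.encode("utf-8"),    # key: your Facebook app secret
        access_token.encode("utf-8"),  # message: the user's access token
        hashlib.sha256,
    ).hexdigest()

# Append the result as the appsecret_proof parameter on server-side Graph API calls.
proof = appsecret_proof("example-user-token", "example-app-secret")
print(len(proof))  # 64 hex characters
```

Because the proof is keyed with the app secret, a stolen bare token is useless against an app that requires it.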

How To Get Started With Stormpath Social Integration

To use Google or Facebook with Stormpath, follow these three steps:

  1. Create a Facebook or Google Directory in Stormpath to mirror social accounts. You can do this via the Stormpath Admin Console, or our REST API using a POST like this:
POST https://api.stormpath.com/v1/directories?expand=provider
Content-Type: application/json;charset=UTF-8

{
  "name" : "my-google-directory",
  "description" : "A Google directory",
  "provider": {
    "providerId": "google"
  }
}
  2. Assign the created directory to your application in Stormpath.
  3. Populate your directory with social accounts from Google or Facebook using the application’s accounts endpoint.

That is it! Your application can now access social accounts. And you didn’t have to touch any OAuth!
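For illustration, here is a hedged sketch of step 1’s directory-creation request in Python using only the standard library. It builds the request shown above but stops short of sending it, since a real call also needs your Stormpath API key credentials.

```python
import json
import urllib.request

payload = {
    "name": "my-google-directory",
    "description": "A Google directory",
    "provider": {"providerId": "google"},
}

req = urllib.request.Request(
    "https://api.stormpath.com/v1/directories?expand=provider",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json;charset=UTF-8"},
    method="POST",
)
# urllib.request.urlopen(req) would send it; omitted here because the
# call must also be authenticated with your Stormpath API key.
```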

Future Stormpath releases will support additional social account providers. Please give us your feedback and let us know which ones we should release next!

April 16, 2014

KatasoftCryptography 101 [Technorati links]

April 16, 2014 03:00 PM

Cryptography is an important part of almost every web or mobile application and yet most developers feel that they don’t understand it or worse, are doing it wrong. Yes, the field of cryptography is dominated by uber-smart mathematicians, researchers, and PhDs galore but thanks to the hard work of those people, we have great tools that the average dev can benefit from.

Here at Stormpath, our goal is to make developers’ lives easier and help them build more secure applications faster. Sharing the basics of cryptography will hopefully help you in your development projects.

What is Cryptography

Cryptography is the practice of protecting information from undesired access by hiding it or converting it into nonsense. For developers, cryptography is a notoriously complex topic and difficult to implement.

The goal of this guide is to walk you through the basics of cryptography. How some of these algorithms and techniques are implemented often depends on your chosen language and/or framework, but this guide will hopefully help you know what to look for. If you’re a Java developer, check out Apache Shiro. It’s a popular security framework that makes implementing crypto much easier than messing with the JCE. ::cough:: We’re the authors.


First, a bit of terminology. In crypto-land, you can break most algorithms down to ciphers and hashes.

When to use a Cipher vs a Hash

Put simply, you should use a hash when you know you will not need the original data in plain text ever again. When would you ever store data whose original value you’ll never need? Well, passwords are a good example.

Let’s say Sean’s password is ‘Suck!tTrebek’. Your application doesn’t actually need to know the raw value of the password. It just needs to be able to verify that any future authentication attempt by Sean gives you a matching value. Hashes are great for that because the hash, while gibberish, will always be the same for a particular input, thereby letting you verify a match. Because you just need the hash to perform comparisons, the big benefit to you is that you don’t need to store Sean’s raw (plaintext) password directly in your database, which would be a bad idea.

Another example is a checksum. A checksum is a simple way to know if data has been corrupted or tampered with during storage or transmission. You take the data, hash it, and then send data along with the hash (the checksum). On the receiving end, you’ll apply the same hashing algorithm to the received data and compare that value to the checksum. If they don’t match, your data has been changed. This is one of the strategies Stormpath uses to prevent man-in-the-middle attacks on API requests.
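The checksum idea can be sketched in a few lines of Python (plain SHA-256 here; note that a keyed HMAC, rather than a bare hash, is what you’d want when the checksum itself travels with the data and an attacker could rewrite both):

```python
import hashlib

def checksum(data: bytes) -> str:
    """SHA-256 digest of the data, hex-encoded."""
    return hashlib.sha256(data).hexdigest()

message = b"important payload"
sent_sum = checksum(message)  # transmitted alongside the data

# Receiving end: re-hash what arrived and compare to the checksum.
print(checksum(b"important payload") == sent_sum)  # True: data intact
print(checksum(b"important payIoad") == sent_sum)  # False: data was changed
```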

Ciphers, on the other hand, are your tool if you will eventually need the original raw value (called ‘plaintext’ in cryptography, even if it’s binary data). Credit card numbers are a good example here. Sean gives you his credit card and later you’ll need the plaintext value to process a transaction. In order to encrypt and decrypt any piece of data you’ll need to deal with keys and make sure you’re keeping them safe.

Working with Ciphers

Want to encrypt and decrypt your sensitive data like a boss? Let’s talk ciphers.

Choosing a Cipher Algorithm

If you’re reading this… well, then you should probably stick to AES-256 encryption, as it is approved by the US government for top-secret material.

Most languages and many security frameworks support multiple cipher algorithms but determining which is appropriate for you is a complex topic outside the scope of this guide. Consult your local cryptanalyst.

Ciphers typically come in two varieties: symmetric and asymmetric. Symmetric ciphers can be further categorized into Block and Stream ciphers. Let’s discuss the differences.

Symmetric vs Asymmetric Cipher

Symmetric encryption (aka secret key encryption) uses the same (or trivially similar) key to both encrypt and decrypt data. As long as the sender and receiver both know the key, they can encrypt and decrypt all the messages that use that key. AES and Blowfish are both Symmetric (Block) Ciphers.

But there could be a problem: what happens if the key falls into the wrong hands? Oh no!

Asymmetric ciphers to the rescue. Asymmetric encryption (aka public key encryption) uses a pair of keys, one key to encrypt and the other to decrypt. One key, the public key, is published openly so that anyone can send you a properly encrypted message. The other key, the private key, you keep secret so that only you can decrypt those messages. Asymmetric sounds perfect, right? It has limitations too, unfortunately. It’s slower, using more computing resources than a symmetric cipher, and it requires more coordination between parties for each direction of communication.

Consider defaulting to symmetric encryption until your project requirements suggest otherwise. And always remember to properly secure your keys.

Block vs Stream Cipher

Symmetric ciphers can be categorized into two sub-categories: Block Ciphers and Stream Ciphers. A Stream Cipher has a streaming key that is used to encrypt and decrypt data. A Block Cipher uses a ‘block’ of data (a ‘chunk’ of bytes = a byte array) as the key that is used to encrypt and decrypt. We recommend most people use Block Ciphers since byte array keys are easy to work with and store. AES and Blowfish are both Block Ciphers for example.

Stream ciphers have their benefits (like speed) but are harder to work with so we don’t generally recommend them unless you know what you’re doing.

There’s a common misconception that Block ciphers are for block data like files and data fields while Stream ciphers are for stream data like network streams. This is not the case. The word ‘stream’ reflects a stream of bits used for the key, not the raw data (again, ‘plaintext’) being encrypted or decrypted.

Securing Keys… Yes

Speaking of securing your keys, you should! No, I’m serious. If someone ever gets a hold of your private keys, all your encrypted data might as well be in plaintext. Like most everything in Cryptography, there are very advanced strategies but most are outside the skill set and budget of most developers. If you can afford key management software, then it’s your best and safest bet. Otherwise, you can still reduce some risk with basic strategies.

  1. Keep your keys on another server and database than your encrypted data.
  2. Keep that server’s security patches 100% up-to-date all the time (Firmware, OS, DB, etc).
  3. Lock down its network access as much as you can. Invest in a good firewall.
  4. Limit who on your team has access to the server and enforce multi-factor authentication for them to access it.

Working with Hashes

Don’t actually care what the original input value was but still need to do other things like matching? Let’s hash it out… get it?

Choosing a Hash Algorithm

Here too, most languages and many security frameworks support a variety of algorithms, including MD2, MD5, SHA-1, SHA-256, SHA-384, SHA-512, BCrypt, PBKDF2, and Scrypt. Unless you have a requirement for a particular hashing algorithm, we recommend you stick to BCrypt for secure data like passwords. It is a widely used and reviewed algorithm.

If you’ve heard of Scrypt before, then you’re probably wondering “Shouldn’t I be using Scrypt? Isn’t it better?” Maybe. Cryptanalysts are nothing if not paranoid (to our benefit). The prominent view among experts is that Scrypt is very promising but still too new to be considered a guaranteed bet for most people. So we recommend you stick with BCrypt for now, until more concrete research confirms Scrypt as better.

Salting and Repeated Hashing

For secure data, hashing is often not enough. The same message hashed will always have the same output and can therefore be susceptible to dictionary and rainbow table attacks.

To protect against these attacks, you should use salts. A salt is (preferably) randomly generated data that is used as an input to a hashing algorithm to generate the hash. This way, the same input message with two different salts will have different hashes.

Again, think of a password. If your password is 12345, then the hashed output would be the same as that of anyone else who uses the same password (and very easy for an attacker to guess), unless the password is salted and each person has a different salt. Oh, and please don’t use the same salt for every record; that’s not a good idea. A different secure-random salt for each password hash is a good idea.
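A minimal sketch of salting plus verification using Python’s standard library (PBKDF2 via hashlib as a stand-in, since BCrypt needs a third-party package; the parameter choices here are illustrative):

```python
import hashlib
import hmac
import os

ITERATIONS = 100_000  # the 'work factor'; tune for your hardware

def hash_password(password: str):
    """Return (salt, digest) with a fresh random salt per record."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, ITERATIONS)
    return salt, digest

def verify(password: str, salt: bytes, expected: bytes) -> bool:
    """Re-hash the attempt with the stored salt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, ITERATIONS)
    return hmac.compare_digest(candidate, expected)

# Same password, different salts -> different stored hashes.
salt_a, hash_a = hash_password("12345")
salt_b, hash_b = hash_password("12345")
print(hash_a != hash_b)  # True
print(verify("12345", salt_a, hash_a))  # True
```

Only the salt and digest get stored; the raw password never does.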

In addition to salting, repeated hashing is recommended. Repeated hashing increases the time it takes someone to try to guess a password. For a user in human time, a difference of milliseconds to half a second is probably negligible. But to an attacker running a script with billions of password candidates, the added time per password can change the time it takes to attack a password from hours to years or even centuries! You’ll need to play with the appropriate number of iterations (or ‘rounds’ in BCrypt speak) in order to find the appropriate balance of security and performance.

What work factor (or # of iterations) should I use?

The complexity of your hashing algorithm comes down to performance versus security. If you’re talking about passwords, then a good rule of thumb is 500 milliseconds to process the algorithm on your production hardware. Most people won’t notice a half second delay during an authentication but it will make an attack prohibitively expensive for many attackers.
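One rough way to apply that rule of thumb: time a small probe on the target hardware and scale the iteration count linearly toward the 500 ms budget. An illustrative sketch (PBKDF2 stands in for your hashing algorithm of choice; the probe size and target are assumptions to tune):

```python
import hashlib
import time

def measure(iterations: int) -> float:
    """Seconds taken to run PBKDF2-HMAC-SHA256 with the given iteration count."""
    start = time.perf_counter()
    hashlib.pbkdf2_hmac("sha256", b"benchmark-password", b"\x00" * 16, iterations)
    return time.perf_counter() - start

probe_iters = 20_000
elapsed = measure(probe_iters)
target = 0.5  # seconds per hash, per the rule of thumb
suggested = int(probe_iters * target / elapsed)
print(f"{probe_iters} iterations took {elapsed:.4f}s; try roughly {suggested} iterations")
```

Run it on the production hardware, not your laptop, since that is where the 500 ms budget actually applies.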

Keeping your Hashes and Ciphers up to date

The limiting factors to someone cracking your hashed or encrypted data are compute power and time. If a password is properly hashed with strong salts and a high number of iterations (work factor), it might take an attacker years to break a single password today. With passwords in particular, you could have a high enough complexity factor so that it takes 0.5 to 1 second to process a password. But Moore’s law presents a problem: every year compute power doubles and compute costs drop significantly. Moreover, new technologies pop up all the time that give attackers new advantages, like elastic on-demand compute clouds or powerful GPUs. So what is a pretty secure hashing strategy today may not be tomorrow. For any production application, you should be re-evaluating your strategy at least once a year, and perhaps ratcheting up things like the number of hash iterations or salt sizes.


Julian BondWhat a most excellent collection of images. [Technorati links]

April 16, 2014 07:56 AM

Vittorio Bertocci - MicrosoftCalling Office365 API from a Windows Phone 8.1 App [Technorati links]

April 16, 2014 07:34 AM

Did you install the preview of Windows Phone 8.1? I sure did, and it’s awesome!

Windows Phone 8.1 introduces a great new feature, which was until recently only available on Windows 8.x: the WebAuthenticationBroker (WAB for short from now on). ADAL for Windows Store leverages the WAB for all of its authentication UI rendering needs, and that saved us a tremendous amount of work compared to other platforms (such as classic .NET) on which we had to handle the UX (dialog, HTML rendering, navigation, etc.) on our own.

To give you a practical example of that, and to amuse myself during the 9.5-hour Seattle-Paris flight I am sitting on, I am going to show you how to use the WAB on Windows Phone 8.1 to obtain a token from Azure Active Directory: you’ll see that the savings compared to the older Windows Phone sample (where I did have to handle the UX myself) are significant. If you prefer to watch a video, rather than putting up with my logorrhea, check out the recording of the session on native clients I delivered at //BUILD just 10 days ago: the very first demo I show is precisely the same app, though I cleaned up the code a bit since then.

The WebAuthenticationBroker and the Continuation Pattern

The WAB on Windows Phone 8.1 differs from its older Windows 8.x sibling in more than the size of its rendering surface. The one difference you can’t ignore (and the reason for which you can’t just reuse ADAL for Windows Store on the phone) lies in the programming model it exposes. Note: the WAB coverage on MSDN is excellent, and I recommend you refer to it rather than relying on what I write here (usual disclaimers apply). Here I’ll just give a barebone explanation covering the essential for getting the WAB to work with AAD.


Referring to the diagram above: (1) whenever you call the phone WAB from your code, your app gets suspended and the WAB “app” takes over the foreground spot. The user goes through (2) whatever experience the identity provider serves; once the authentication flow comes to an end, (3) the WAB returns control to your app and disappears.
Here is where things get interesting. For your app, this is just another activation: you need to add some logic to detect that this activation was caused by the WAB returning from an authentication, and ensure that the values returned by the WAB are routed to the code that needs to resume the authentication logic and process them.
The idea is that your app needs an object which implements a well-known interface (IWebAuthenticationContinuable), which includes a method (ContinueWebAuthentication) meant to be used as the re-entry point at reactivation time. In the diagram above it is the page itself that implements IWebAuthenticationContinuable; the OnActivated handler (4) calls the method directly, passing the activation event arguments, which will be materialized in ContinueWebAuthentication as WebAuthenticationBrokerContinuationEventArgs. Those arguments contain the values you typically expect from the WAB, such as the authorization code produced by an OAuth code grant flow.

This is a common pattern in Windows Phone 8.1: it goes under the name “AndContinue”, from the structure of the primitives used. It is applied whenever a “system” operation (such as the WAB, but also file picking) might end up requiring a lot of resources, making it hard for an app on a low power device to keep active in memory both the app and the process handling the requested task. Once again, MSDN provides great coverage for this.

The Sample

Too abstract for your taste? Presto, let’s look at some code. Here I will skip all of the client app provisioning in AAD, as we’ve covered that task many times. If you want a refresher, just head to one of the samples on GitHub and refer to the instructions there. <LINK>

As mentioned in the title, we want an app that will invoke an Office365 API. We won’t do anything fancy with the results, as I just want to show you how to obtain and use a suitable token. If you want to get a trial of Office 365, check out this link <LINK>. Also, if you don’t want to set up a subscription you can easily repurpose this sample to call any other API (such as the Graph or your own).

Ready? Go! Create a new blank Windows Phone 8.1 app. Make sure to pick the Store flavor. Add a button for triggering the API call.

In your main page, add the declaration for your IWebAuthenticationContinuable. Note that you can decide to return a value, if you so choose.

interface IWebAuthenticationContinuable
{
    void ContinueWebAuthentication(WebAuthenticationBrokerContinuationEventArgs args);
}


That done, add it to the page declaration as an implemented interface, and add the logic for requesting tokens and using them via the continuation model. We’ll flesh those stubs out in a moment.

public sealed partial class MainPage : Page, IWebAuthenticationContinuable
{
    private void btnInvoke_Click(object sender, RoutedEventArgs e)
    {
        RequestCode();
    }

    public async void ContinueWebAuthentication(WebAuthenticationBrokerContinuationEventArgs args)
    {
        string access_token = await RequestToken(args.WebAuthenticationResult);
        HttpClient httpClient = new HttpClient();
        httpClient.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", access_token);
        HttpResponseMessage response = await httpClient.GetAsync("https://outlook.office365.com/EWS/OData/Me/Inbox/Messages?$filter=HasAttachments eq true&$select=Subject,Sender,DateTimeReceived");
        if (response.IsSuccessStatusCode)
        {
            // ...consume the returned messages here
        }
    }
}


The btnInvoke_Click triggers the request for a token, via the call to the yet-to-be-defined method RequestCode(). We know that requesting a code will require user interaction, hence we can expect that the call to AuthenticateAndContinue (hence the app deactivation & switch to the WAB) will take place in there. That explains why there’s nothing else after the call to RequestCode.

The ContinueWebAuthentication method implements the logic we want to run once the execution comes back from the WAB. The first line, calling the yet-to-be-defined RequestToken, takes the results from the WAB and presumably uses it to hit the Token endpoint of the AAD’s authorization server.

The rest of the method is the usual boilerplate logic for calling a REST API protected by the OAuth2 bearer token flow – still, I cannot help but marvel at the amazing simplicity with which you can now access Office resources. With that simple (and perfectly readable!) string I can obtain a list of all the messages with attachments from my inbox, and even narrow down which fields I care about.

Let’s take a look at the code of RequestCode and RequestToken.

string Authority = "https://login.windows.net/developertenant.onmicrosoft.com";
string Resource = "https://outlook.office365.com/";
string ClientID = "43ba3c74-34e2-4dde-9a6a-2671b53c181c";
string RedirectUri = "http://l";

private void RequestCode()
{
    // Standard OAuth2 authorization code request against AAD's authorize endpoint
    string authURL = string.Format(
        "{0}/oauth2/authorize?response_type=code&resource={1}&client_id={2}&redirect_uri={3}",
        Authority, Resource, ClientID, RedirectUri);
    WebAuthenticationBroker.AuthenticateAndContinue(new Uri(authURL), new Uri(RedirectUri), null, WebAuthenticationOptions.None);
}

private async Task<string> RequestToken(WebAuthenticationResult rez)
{
    if (rez.ResponseStatus == WebAuthenticationStatus.Success)
    {
        string code = ParseCode(rez.ResponseData);
        HttpClient client = new HttpClient();
        HttpRequestMessage request = new HttpRequestMessage(HttpMethod.Post, string.Format("{0}/oauth2/token", Authority));
        // Standard OAuth2 authorization code grant: exchange the code for tokens
        string tokenreq = string.Format(
            "grant_type=authorization_code&code={0}&client_id={1}&redirect_uri={2}",
            code, ClientID, RedirectUri);
        request.Content = new StringContent(tokenreq, Encoding.UTF8, "application/x-www-form-urlencoded");

        HttpResponseMessage response = await client.SendAsync(request);
        string responseString = await response.Content.ReadAsStringAsync();

        var jResult = JObject.Parse(responseString);
        return (string)jResult["access_token"];
    }
    else
    {
        throw new Exception(String.Format("Something went wrong: {0}", rez.ResponseErrorDetail.ToString()));
    }
}

private string ParseCode(string result)
{
    int codeIndex = result.IndexOf("code=", 0) + 5;
    int endCodeIndex = result.IndexOf("&", codeIndex);
    // Return the authorization code as a string
    return result.Substring(codeIndex, endCodeIndex - codeIndex);
}


This is mostly protocol mechanics, not unlike the equivalent logic in the older Windows Phone samples I discussed on these pages.

RequestCode crafts the request URL for the Authorization endpoint and passes it to the WAB, calling AuthenticateAndContinue. By now you know what will happen; the app will go to sleep, and the WAB will show up – initialized with the data passed here. MUCH simpler than having to create an in-app auth page and handling navigation by yourself.

RequestToken and its associated utility function ParseCode retrieve the authorization code from the response data returned by the WAB, construct the request for the Token endpoint, hit it, and parse (via JSON.NET, finally available for Windows Phone 8.1! For my //BUILD demo I had to use the data contract serializer, bleah) the access token out of AAD’s response.
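As an aside, the index arithmetic in ParseCode assumes another parameter always follows the code (IndexOf("&") returns -1 otherwise, and Substring throws). A real query-string parser avoids that edge case; sketched here in Python for brevity rather than C#:

```python
from urllib.parse import parse_qs, urlsplit

def parse_code(response_url: str) -> str:
    """Extract the 'code' query parameter from the WAB's response URL."""
    query = urlsplit(response_url).query
    return parse_qs(query)["code"][0]

print(parse_code("http://l/?code=AAABBBccc123&session_state=xyz"))  # AAABBBccc123
print(parse_code("http://l/?code=lastparam"))  # works with no trailing '&' too
```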

If you paid attention to the explanation to how the WAB continuation pattern works, you know that there’s still something missing: the dispatching logic that upon (re)activation routes the WAB results to ContinueWebAuthentication. Open the App.xaml.cs file, locate the OnActivated handler and add the following.

protected async override void OnActivated(IActivatedEventArgs e)
{
    var rootFrame = Window.Current.Content as Frame;

    var wabPage = rootFrame.Content as IWebAuthenticationContinuable;
    if (wabPage != null)
    {
        wabPage.ContinueWebAuthentication(e as WebAuthenticationBrokerContinuationEventArgs);
    }
}


Now, I know that my friends in the Windows Phone 8.1 team will frown super-hard at the above. For starters: to do things properly, you should be prepared to be reactivated by multiple *AndContinue operations. In general, the documentation provides nice classes (like the ContinuationManager) you can use to handle those flows with more maintainable code than the hack I have put together here. My goal here (and during the //BUILD session) was to clarify the new WAB behavior with the least amount of code. Once that is clear to you, I encourage you to revisit the above and re-implement it using the proper continuation practices.

Aaanyway, just to put a bow on this: here’s what you see when running the app. I landed in Paris and I can finally connect to the cloud Smile

The first page:


Pressing the button triggers RequestCode, which in turn calls WebAuthenticationBroker.AuthenticateAndContinue and causes the switch to the WAB:


Upon successful auth, we get back a token and we successfully call Exchange online:


Ta-dah!


Windows Phone 8.1 is a great platform, and WAB is a wonderful addition that will make us identity people very happy. The continuation model will indeed impose a rearranging of the app flow. We are looking at ways in which we can abstract away some of the details for you, so that you can keep operating with the high level primitives you enjoy in ADAL without (too much) abstraction leakage. Stay tuned!

Julian BondNext time somebody tries to tell you that big pharma is hiding medical cures, or the illuminati, sorry... [Technorati links]

April 16, 2014 07:27 AM
Next time somebody tries to tell you that big pharma is hiding medical cures, or the illuminati, sorry, the 1%, are manipulating world society, or big oil invaded Iraq, or similar conspiratorial bullshit, just say:- 

"That's all a bit 'lizard people', isn't it?"
[from: Google+ Posts]

Julian BondAfter the Big Bang is it Space-Time that expands or the distances between the things in it? [Technorati links]

April 16, 2014 07:23 AM
After the Big Bang is it Space-Time that expands or the distances between the things in it?

Something I continue to have trouble getting my head round is the idea that there are bits of the universe that are so far apart (and accelerating away from each other) that there hasn't been enough time since the Big Bang for light to travel between them. So there's a kind of quantum foam of light cones that can't interact. But if nothing can travel faster than the speed of light, then how did these bits of stuff get further apart than light could travel in the available time?  

http://en.wikipedia.org/wiki/Metric_expansion tries to explain this and I think I'm beginning to get it. It also helpfully points out that lots of highly qualified physicists have trouble understanding this as well, so it's not just me! There are bits of it that still feel like handwavium. In particular it feels a bit like http://en.wikipedia.org/wiki/Copenhagen_interpretation in that it's only difficult to think about because you're treating the equations as objective reality. It's all very well to say that it's space-time that's expanding not the stuff in it but, but,
 Metric expansion of space - Wikipedia, the free encyclopedia »
Basic concepts and overview. Overview of metrics. Main article: Metric (mathematics). To understand the metric expansion of the universe, it is helpful to discuss briefly what a metric is, and how metric expansion works.

[from: Google+ Posts]
April 15, 2014

Julian BondWhat do we want? [Technorati links]

April 15, 2014 08:29 AM
What do we want?
Evidence based medicine.

When do we want it?
After full, transparent publication of all trial results both future and historical, peer review and without being encumbered by long term patents.

And we want our governments to subsidise this for the good of society as a whole and to properly enforce the rules with realistic penalties. And without the market being hopelessly skewed by mandated oligopolies bought with high priced lobbying. And without government money being wasted on high priced stockpiles that do nothing. (like Tamiflu: here's looking at you, Roche).

As the article points out, EU regulations pushing for greater transparency on clinical trials are a good thing, but not if they ignore historical results and are never enforced.

 Clinical trials and tribulations: a role for Europe | The Pirate Party »
It's hard to imagine a better fairy-tale villain than a big pharma company. There's something undeniably sinister about these vast, faceless titans with their unfathomable methods and international reach; so much so that it's sometimes an effort to remember that, actually, they're the ones who ...

[from: Google+ Posts]
April 14, 2014

CA on Security ManagementBeware the UnDead Password [Technorati links]

April 14, 2014 10:14 PM
Recently I took my daughters to see the RiffTrax Live showing of Night of the Living Dead.  RiffTrax is a group of three guys who show movies and goof on them (You can get more information here).  Night of the...


KatasoftMulti-Tenant User Management - the Easy Way [Technorati links]

April 14, 2014 08:29 PM

Building a multi-tenant SaaS isn’t easy, but in a world where your customers expect on-demand services and your engineering team wants a central codebase, multitenancy offers tremendous value. 

The hardest part is user management. Multi-tenant applications come with special user considerations:

As you might have guessed, Stormpath’s data model natively supports multi-tenant user management out-of-the-box. You don’t have to worry about building or managing data partitions yourself, and can focus on building your app’s real features. 

But, how do you build it? We’ve created a comprehensive Guide to Building Multi-tenant Apps and this post will specifically focus on how to model user data for multi-tenancy. We will also show how to build a multi-tenant application faster and more securely with Stormpath, a cloud-hosted user management service that easily supports multi-tenant user models.

What is a Multi-Tenant application?

Unlike most web applications that support a single company or organization with a tightly-coupled database, a multi-tenant application is a single application that services multiple organizations or tenants simultaneously. Multi-tenant apps need to ensure each Tenant has its own private data partition so the data is cleanly segmented from other tenants. The challenge: very few modern databases natively support tenant-based data partitioning. 

Devs must figure out how to do this either using separate physical databases or by creating virtual data partitions in application code.  Due to infrastructural complexities at scale, most engineering teams avoid the separate database approach and implement virtual data partitions in their own application code. 
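
In practice, a virtual partition usually amounts to a tenant identifier column plus the discipline of scoping every query by it. A minimal sketch (the table and column names are made up for illustration):

```python
# Virtual data partitioning: every row carries a tenant_id and every
# query is scoped to it. Illustrative only; schema names are invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (tenant_id TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('customerA', 'jsmith@customerA.com')")
conn.execute("INSERT INTO users VALUES ('customerB', 'ann@customerB.com')")

def users_for_tenant(tenant_id):
    """Return only the rows belonging to one tenant's partition."""
    rows = conn.execute(
        "SELECT email FROM users WHERE tenant_id = ?", (tenant_id,))
    return [r[0] for r in rows]
```

The risk is exactly what this sketch shows: forget the WHERE clause once, and one tenant sees another tenant's data.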

Our Guide to Building Multi-tenant Apps goes into deep detail on how to set up tenants and their unique identifiers. In this post, we will dive straight into setting up user management for your multi-tenant application.

Multi-Tenant User Management

Why use Stormpath for Multi-Tenant Applications?

Aside from the security challenges that come with partitioning data, setting up partitioning schemes and data models takes time. Very few, if any, development frameworks support multi-tenancy, so developer teams have to build out multi-tenant user management themselves.

Stormpath’s data model supports two different approaches for multi-tenant user partitioning. But first, a little background.

Stormpath Data Model Overview

Most application data models assign user Accounts and groups directly to the application. For example:

Traditional Application User Management Model:

              +----->| Account |
              | 1..* +---------+
+-------------+      ^
| Application |      |
+-------------+      v
              | 1..* +-------+
              +----->| Group |

But this isn’t very flexible and can cause problems over time – especially if you need to support more applications or services in the future.

Stormpath is more powerful and flexible. Instead of tightly coupling user accounts and applications, Accounts and Groups are ‘owned’ by a Directory, and an Application can reference one or more Directories dynamically:

Stormpath User Management Model:

                                 +----->| Account |
                                 | 1..* +---------+
+-------------+ 1..* +-----------+      ^
| Application |----->| Directory |      |
+-------------+      +-----------+      v
                                 | 1..* +-------+
                                 +----->| Group |

A Directory isn’t anything complicated – think of it as simply a ‘top level bucket for Accounts and Groups’. Why did we do it this way?

This directory-based model supports two approaches for partitioning multi-tenant user data:

Approach 1: Single Directory with a Group-per-Tenant

Recommended for most multi-tenant applications.

This design approach uses a single Directory, which guarantees Account and Group uniqueness. A Tenant is represented as a Group within a Directory, so you would have (at least) one Group per Tenant.

For example, let’s assume new user jsmith@customerA.com signs up for your application. Upon submit you would:

  1. Insert a new Account in your designated Directory. This will be a unique account.
  2. Generate a compatible subdomain name for their tenant and create an equivalent Group in your designated Directory. Your ‘Tenant’ record is simply a Group in a Stormpath Directory.
  3. Assign the just-created jsmith@customerA.com Account to the new Group. Any other Accounts added over time to this Group will also immediately be recognized as users for that Tenant.
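
The three steps above can be modeled in a few lines. This is an in-memory sketch of the Group-per-Tenant scheme, not the Stormpath SDK; all names are illustrative:

```python
class Directory:
    """Minimal in-memory model of a single directory that enforces
    account uniqueness, with one group per tenant."""
    def __init__(self):
        self.accounts = {}   # email -> account record
        self.groups = {}     # tenant (group) name -> set of member emails

    def create_account(self, email):
        if email in self.accounts:   # a Directory guarantees uniqueness
            raise ValueError("account %s already exists" % email)
        self.accounts[email] = {"email": email}

    def create_group(self, name):
        self.groups.setdefault(name, set())

    def add_to_group(self, email, name):
        self.groups[name].add(email)

    def tenant_users(self, name):
        return sorted(self.groups[name])

# Sign-up flow for jsmith@customerA.com:
d = Directory()
d.create_account("jsmith@customerA.com")             # 1. unique account
d.create_group("customerA")                          # 2. tenant == group
d.add_to_group("jsmith@customerA.com", "customerA")  # 3. membership
```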

We cover the many benefits of the Single Directory approach – as well as how to implement it – in the Multi-Tenant Guide , but at a high level, this approach has the following benefits:

The Single Directory, Group-per-Tenant approach is the simplest model, easiest to understand, and provides many desirable features suitable for most multi-tenant applications. Read more.

Approach 2: Directory-per-Tenant

In Stormpath, an Account is unique only within a Directory. This means:

Account jsmith@gmail.com in Directory A

is not the same identity record as

Account jsmith@gmail.com in Directory B.

As a result, you could create a Directory in Stormpath for each of your tenants, and your user Account identities will be 100% separate. With this Directory-per-Tenant approach, your application’s user Accounts are only unique within a tenant (Directory), and users could register for multiple tenants with the same credentials.
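
The per-directory uniqueness scope is easy to picture as nested maps (an illustrative sketch, not the Stormpath API):

```python
# Directory-per-Tenant: account uniqueness is scoped to the directory,
# so the same email can exist independently in two tenants' directories.
# Names and fields are invented for illustration.
directories = {
    "tenantA": {"jsmith@gmail.com": {"password_hash": "hashA"}},
    "tenantB": {"jsmith@gmail.com": {"password_hash": "hashB"}},
}

def get_account(tenant, email):
    """Look up an account; which identity record you get depends on the tenant."""
    return directories[tenant].get(email)
```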

Directory-per-Tenant is an advanced data model that offers more flexibility, but at the expense of simplicity. This is the model we use at Stormpath, and it is only recommended for more advanced applications or those with special requirements. 

As a result, we don’t cover the approach in further detail here. If you feel the Directory-per-Tenant approach might be appropriate for your project, and you’d like some advice, just email support@stormpath.com. We are happy to help you model out your user data, whether or not Stormpath is the right option for your application.

We’re Always Here to Help

Whether you’re trying to figure out multi-tenant approaches for your application or have questions about a specific Stormpath API, we’re always here to help. Please feel free to contact us at support@stormpath.com.

IDMGOVFICAM TFS TEM on Identity Resolution Needs for Online Service Delivery [Technorati links]

April 14, 2014 07:50 PM
The FICAM Trust Framework Solutions (TFS) Program is convening public and private sector experts in identity proofing, identity resolution and privacy for an Identity Resolution Needs for Online Service Delivery Technical Exchange Meeting (TEM) on 5/1/14 from 9:00 AM - 5:00 PM EST in Washington, DC.


Save the 5/1/14 date! In-person attendance and early registration (due to limited space) are recommended.

Register Now!

Event Location: GSA, 1800 F St NW, Washington, DC 20405

In-person event logistics information will be provided to registered attendees. Remote attendance information will be made available to registered attendees who are not able to attend in-person.

Questions? Please contact the FICAM TFS Program at TFS.EAO@gsa.gov


Identity attributes that are used to uniquely distinguish between individuals (versus describing individuals) are referred to as identifiers. Identity resolution is the ability to resolve identity attributes to a unique individual (i.e. no other individual has the same set of attributes) within a particular context.

Within the context of enabling high value and sensitive online government services to citizens and businesses, the ability to uniquely resolve the identity of an individual is critical to delivering government benefits, entitlements and services.

As part of the recent update to FICAM TFS, we recognized the Agency need for standardized approaches to identity resolution in our Approval process for Credential Service Providers (CSPs) and Identity Managers (IMs).

The study done by the NASPO IDPV Project, "Establishment of Core Identity Attribute Sets & Supplemental Identity Attributes – Report of the IDPV Identity Resolution Project (February 17, 2014)", is currently being used as an industry-based starting point for addressing this need. The study proposed 5 equivalent attribute bundles that are sufficient to uniquely distinguish between individuals in at least 95% of cases involving the US population.


However, the FICAM TFS Program recognizes that the NASPO IDPV study is a starting point, not the end. As such, we are convening this TEM to:


If you have expertise in identity resolution, identity proofing and related privacy aspects, and have data-backed research and results to share on this topic, we are interested in hearing from you. Please contact us at TFS.EAO@gsa.gov by COB 4/16/14 with your proposed discussion topic.

DRAFT AGENDA for 05/01/2014

The TEM will seek to address this topic across three dimensions: (1) Identity Resolution (2) Privacy and (3) Business Models / Cost.

09:00 AM - 09:30 AM Attendee Check-In
09:30 AM - 09:55 AM Welcome & TEM Overview/Goals

10:00 AM - 10:10 AM FICAM TFS Level Set on Resolution
10:15 AM - 10:40 AM Agency Viewpoint Panel
10:45 AM - 11:10 AM Industry Viewpoint Panel
11:15 AM - 11:45 AM Resolution Discussion / Q&A

11:45 AM - 01:00 PM LUNCH (On your own) & NETWORKING

01:00 PM - 01:10 PM FICAM TFS Level Set on Privacy
01:15 PM - 01:40 PM Agency Viewpoint Panel
01:45 PM - 02:10 PM Industry Viewpoint Panel
02:15 PM - 02:45 PM Privacy Discussion / Q&A

02:45 PM - 03:00 PM BREAK

03:00 PM - 03:10 PM FICAM TFS Level Set on Business Models / Cost
03:15 PM - 03:40 PM Agency Viewpoint Panel
03:45 PM - 04:10 PM Industry Viewpoint Panel
04:15 PM - 04:45 PM Business Models / Cost Discussion and Q&A

04:45 PM - Event Wrap-up

Sign up for our notification list @ http://www.idmanagement.gov/trust-framework-solutions to be kept updated on this and future FICAM TFS news, events and announcements.

:- by Anil John
:- Program Manager, FICAM Trust Framework Solutions

CourionRe-set Your Passwords, Early & Often [Technorati links]

April 14, 2014 12:29 PM

Access Risk Management Blog | Courion

Jason Mutschler

On Monday April 7th, OpenSSL disclosed a bug in their software that allows data, which can include unencrypted usernames and passwords, to be collected from memory remotely by an attacker.  OpenSSL is the most popular open source SSL (Secure Sockets Layer) implementation and the software is used by many popular websites such as Yahoo, Imgur, Stackoverflow, Flickr and Twitpic.  Many of these popular websites have been patched. However, as of this writing some, including Twitpic, remain vulnerable.

Several tools have become available to check whether an individual website is vulnerable. We recommend that you double-check whether websites that you use are affected before logging in.  Once a website you use is no longer vulnerable, you should still reset your password, since it may have been captured while the server was vulnerable.  The bug is also present in some client software, and a malicious web server could be used to collect data from memory on client machines running these pieces of software.

This particular vulnerability has been present since 2012 and underscores the need to look beyond typical perimeter defenses and continuously monitor for unusual behavior within your network.  Persistent attackers will continue to find creative ways to breach the perimeter, and detecting abnormal use of valid credentials is becoming extremely important.

By the way, Courion websites, including the Support Portal and the CONVERGE registration page remain unaffected by this vulnerability.


April 13, 2014

Anil JohnStandardizing the RP Requirements for Identity Resolution [Technorati links]

April 13, 2014 06:00 PM

When a credential from an outsourced CSP shows up at the front door of a RP, the RP needs two pieces of information. First, an answer to the question “Are you the same person this credential was issued to?” and second, information to uniquely resolve and enroll the credential holder at the RP. We have more or less standardized the first bit, but have not been as mindful about the second.

I have my own opinions as to why this has not been done before:

At the same time, I do believe that in order to deliver public sector services, it is critical to address this issue. But it needs to be done in a manner that looks at the world as it exists and not as we would wish it to be, which in the U.S. means that:

To make this happen will require three things:

  1. A clear understanding by the RP of the various approaches it can utilize to enroll users
  2. An understanding of the context in which IP/proprietary approaches have a role in identity resolution, e.g. at the "identity proofing component"
  3. Development and standardization of the quantitative criteria used by the RP to evaluate the information it needs for identity resolution


This blog post, Standardizing the RP Requirements for Identity Resolution, first appeared on Anil John | Blog. These are solely my opinions and do not represent the thoughts, intentions, plans or strategies of any third party, including my employer.

Julian BondGlobal Warming won't be as bad as the IPCC predict and will peak at the low end of their predictions... [Technorati links]

April 13, 2014 06:42 AM
Global Warming won't be as bad as the IPCC predict and will peak at the low end of their predictions.

Because society will have collapsed by then.

So that's all good then!


ps. Have you noticed how 2030 is no longer the far future? The doomsayers are predicting major disruption by 2030 which is now only ~15 years away.
 Oil Limits and Climate Change - How They Fit Together »
We hear a lot about climate change, especially now that the Intergovernmental Panel on Climate Change (IPCC) has recently published another report. At the same time, oil is reaching limits, and thi...

[from: Google+ Posts]
April 11, 2014

ForgeRockForgeRock Software Not Affected by ‘Heartbleed’ Security Flaw [Technorati links]

April 11, 2014 09:33 PM

A few days ago, it was announced that there is a major vulnerability in OpenSSL, known as Heartbleed. ForgeRock customers running enterprise software will not be affected by this vulnerability.

Important notes:

The post ForgeRock Software Not Affected by ‘Heartbleed’ Security Flaw appeared first on ForgeRock.

Mike Jones - MicrosoftJSON Web Key (JWK) Thumbprint Specification [Technorati links]

April 11, 2014 12:47 AM

IETF logoI created a new simple spec that defines a way to create a thumbprint of an arbitrary key, based upon its JWK representation. The abstract of the spec is:

This specification defines a means of computing a thumbprint value (a.k.a. digest) of JSON Web Key (JWK) objects analogous to the x5t (X.509 Certificate SHA-1 Thumbprint) value defined for X.509 certificate objects. This specification also registers the new JSON Web Signature (JWS) and JSON Web Encryption (JWE) Header Parameters and the new JSON Web Key (JWK) member name jkt (JWK SHA-256 Thumbprint) for holding these values.

The desire for this came up in an OpenID Connect context, but it’s of general applicability, so I decided to submit the spec to the JOSE working group. Thanks to James Manger, John Bradley, and Nat Sakimura for the discussions that led up to this spec.
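
In outline, a thumbprint of this kind is computed by keeping only the required members for the key type, serializing them with lexicographically ordered keys and no insignificant whitespace, and base64url-encoding a SHA-256 over the UTF-8 bytes. A sketch under those assumptions (consult the spec itself for the normative member lists):

```python
import base64
import hashlib
import json

def jwk_thumbprint(jwk: dict) -> str:
    """Compute a JWK thumbprint: required members only, keys in
    lexicographic order, no whitespace, SHA-256, base64url (no padding).
    The required-member table below is an assumption of this sketch."""
    required = {
        "RSA": ("e", "kty", "n"),
        "EC": ("crv", "kty", "x", "y"),
        "oct": ("k", "kty"),
    }[jwk["kty"]]
    canonical = json.dumps({m: jwk[m] for m in required},
                           sort_keys=True, separators=(",", ":"))
    digest = hashlib.sha256(canonical.encode("utf-8")).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
```

Note that optional members such as alg or use are deliberately excluded, so two representations of the same key yield the same thumbprint.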

The specification is available at:

An HTML formatted version is also available at:

April 10, 2014

GluuImpact of Heartbleed for Gluu Customers [Technorati links]

April 10, 2014 05:12 PM

This blog post provides a good analysis of the impact of Heartbleed: http://www.gluu.co/cacert-heartbleed

If you are running a Shibboleth IDP fronted by an Apache HTTPD server, the private SAML IDP key in the JVM’s memory (i.e. tomcat) would not be exposed to the Apache httpd process.

However, if the web server’s private key is compromised, then you have HTTP, not HTTPS!

Password credentials could have leaked. After patching and re-keying the server, people should be advised to reset their password credentials.

I think this is the biggest impact.

It highlights the cost of our societal over-reliance on passwords–basically the cost of doing nothing. Passwords stolen from one site are used elsewhere. So even if your web server wasn’t compromised, a person may have used the same password on a server that was. So the integrity of password authentication has managed to slip to a new all-time low.


Kuppinger ColeEnterprise Single Sign-On - is there still a need for it? [Technorati links]

April 10, 2014 03:48 PM
In KuppingerCole Podcasts

In this KuppingerCole Webinar, we will look at Enterprise Single Sign-On (E-SSO) and the alternatives. Starting with the use cases for single sign-on and related scenarios, we will analyze the technical alternatives. We look at various aspects such as the time for implementation and the reach in terms of applications, users, and devices, and compare the alternatives.

Watch online

GluuCACert Heartbleed Notification [Technorati links]

April 10, 2014 02:45 PM
I received this note from CAcert today. It provides a good overview of the Heartbleed vulnerability.

See also Shibboleth Security Advisory

Dear customer,

There are news [1] about a bug in OpenSSL that may allow an attacker to leak arbitrary information from any process using OpenSSL. [2]

We contacted you because you have subscribed to get general announcements, or you have had a server certificate since the bug was introduced into the OpenSSL releases and are especially likely to be affected by it. CAcert is not responsible for this issue, but we want to inform members about it who are especially likely to be vulnerable or otherwise affected.

Good news:
==========
Certificates issued by CAcert are not broken and our central systems did not leak your keys.

Bad news:
=========
Even then you may be affected. Although your keys were not leaked by CAcert, your keys on your own systems might have been compromised if you were or are running a vulnerable version of OpenSSL.

To elaborate on this:
=====================
The central systems of CAcert and our root certificates are not affected by this issue. Regrettably some of our infrastructure systems were affected by the bug. We are working to fix them and already completed work for the most critical ones. If you logged into those systems within the last two years (see list in the blog post) you might be affected!

But unfortunately, given the nature of this bug, we have to assume that the certificates of our members may be affected if they were used in an environment with a publicly accessible OpenSSL connection (e.g. Apache web server, mail server, Jabber server, ...). The bug has been open in OpenSSL for two years - from December 2011 - and was introduced in stable releases starting with OpenSSL 1.0.1. When an attacker can reach a vulnerable service he can abuse the TLS heartbeat extension to retrieve arbitrary chunks of memory by exploiting a missing bounds check. This can lead to disclosure of your private keys, resident session keys and other key material as well as all volatile memory contents of the server process like passwords, transmitted user data (e.g. web content) as well as other potentially confidential information.

Exploiting this bug does not leave any noticeable traces, thus for any system which is (or has been) running a vulnerable version of OpenSSL you must assume that at least your used server keys are compromised and therefore must be replaced by newly generated ones. Simply renewing existing certificates is not sufficient! Please generate NEW keys with at least 2048 bit RSA or stronger!

As mentioned above, this bug can be used to leak passwords, and thus you should consider changing your login credentials to potentially compromised systems, as well as any other system where those credentials might have been used, as soon as possible. An (incomplete) list of commonly used software which includes or links to OpenSSL can be found at [5].

What to do?
===========
- Ensure that you upgrade your system to a fixed OpenSSL version (1.0.1g or above).
- Only then create new keys for your certificates.
- Revoke all certificates which may be affected.
- Check what services you have used that may have been affected within the last two years.
- Wait until you think that those environments got fixed.
- Then (and only then) change your credentials for those services. If you do it too early, i.e. before the sites got fixed, your data may be leaked again. So be careful when you do this.

CAcert's response to the bug:
=============================
- We updated most of the affected infrastructure systems and created new certificates for them. The remaining will follow soon.
- We used this opportunity to upgrade to 4096 bit RSA keys signed with SHA-512. The new fingerprints can be found in the list in the blog post. ;-)
- With this email we contact all members who had active server certificates within the last two years.
- We will keep you updated in the blog.

A list of affected and fixed infrastructure systems and new information can be found at: https://blog.cacert.org/2014/04/openssl-heartbleed-bug/

Links:
[1] http://heartbleed.com/
[2] https://www.openssl.org/news/secadv_20140407.txt
[3] https://security-tracker.debian.org/tracker/CVE-2014-0160
[4] http://www.golem.de/news/sicherheitsluecke-keys-auslesen-mit-openssl-1404-105685.html
[5] https://www.openssl.org/related/apps.html

Kuppinger ColeLeadership Compass: Identity Provisioning - 70949 [Technorati links]

April 10, 2014 12:32 PM
In KuppingerCole

Identity Provisioning is still one of the core segments of the overall IAM market. Identity Provisioning is about provisioning identities and access entitlements to target systems. This includes creating and managing accounts in such connected target systems and associating the accounts with groups, roles, and other types of administrative entities to enable entitlements and authorizations in the target systems. Identity Provisioning is...

April 09, 2014

KatasoftLightweight Authentication and Authorization for MQTT with Stormpath [Technorati links]

April 09, 2014 06:29 PM

This article originally appeared on the HiveMQ blog. A huge ‘Thank You’ to their team for the plugin and writeup!

HiveMQ Logo

Authentication and authorization are key aspects for every Internet of Things application. When using MQTT, topic permissions are especially important for most public-facing MQTT brokers. Learn how you can use Stormpath with HiveMQ to set up fine grained security for your MQTT service in minutes.

For the impatient: You can download the Stormpath HiveMQ plugin here.

Challenges of Authentication and Authorization in the Internet of Things

Security is a big concern in the age of the Internet of Things. More than ever, personal and sensor information is transferred over the Internet: data about conditions and status in our homes or companies, as well as chat messages or status updates revealing our current activity and location. In the wrong hands this kind of information can be exploited to damage people and companies.

Often the problem with architecting for security is not awareness of the challenges and risks, but the implementation of the necessary security measures. Most developers are focused on building applications, and not everybody has deep know-how in implementing secure authentication or authorization.

Stormpath to the Rescue

Stormpath is a User Management API for developers, built for user authentication and authorization in traditional web applications. It is also a perfect fit for Internet of Things applications – no more reinventing the wheel with a manual implementation of user and permission models for your applications. Stormpath stores all user credentials in a centralized, cloud-based directory, and users can be assigned to different groups and granted fine-grained permissions.

Stormpath provides role-based access control by adding users to one or more groups, which is ideal for permissions inside one application. To create user accounts, groups, and so on, Stormpath provides a REST API, SDKs for Java, PHP, Ruby, and Python, and an easy-to-use web UI. More details can be found in the extensive documentation on their website. Another important aspect for IoT applications is the constant availability of all services. The basic version of Stormpath is free and is ideal for prototyping and small applications; it does not provide any guarantees on uptime, though. For enterprise and production usage, Stormpath provides short response times on support requests and 100% availability SLAs.

Use Stormpath for MQTT Authentication and Authorization

So how can we leverage Stormpath to create authentication and authorization for MQTT clients?

First of all, let’s have a look at its architecture.


The figure shows us that Stormpath is organized in different tenants and each tenant has a cloud directory, which can be accessed by a REST API. The API can be used by a variety of applications. Inside the cloud directory are accounts, groups, directories and applications.

We can use the Stormpath structure to associate MQTT clients with accounts. That means whenever a new MQTT client connects, we ask Stormpath whether an account with the MQTT username and password exists, and only then let the client connect. This handles the authentication scenario pretty straightforwardly.

HiveMQ Stormpath Schema

The authorization behavior can be achieved using Stormpath groups. If an authenticated client wants to publish a message, the MQTT broker can look up all groups of that particular account; the group names represent the topics (including wildcards) the client is allowed to use. For example, if a client wants to publish to home/livingroom/temperature, the MQTT broker gets all the groups from Stormpath (say, home/livingroom/#) and checks whether the topic matches the client's permissions. If the client were only in the group home/livingroom/light, the permission to publish would be denied.
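
The matching step follows standard MQTT topic-filter semantics (+ matches one level, # matches the remainder); a compact sketch, not the plugin's actual code:

```python
def topic_matches(topic: str, filter_: str) -> bool:
    """Check an MQTT topic against a topic filter with + and # wildcards."""
    topic_parts = topic.split("/")
    filter_parts = filter_.split("/")
    for i, fp in enumerate(filter_parts):
        if fp == "#":              # multi-level wildcard matches the rest
            return True
        if i >= len(topic_parts):  # filter is longer than the topic
            return False
        if fp != "+" and fp != topic_parts[i]:
            return False
    return len(topic_parts) == len(filter_parts)

def is_authorized(topic, group_names):
    """Allow publish if any of the account's group names matches the topic."""
    return any(topic_matches(topic, g) for g in group_names)
```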

This described behavior is implemented in our Stormpath Plugin for HiveMQ, which retrieves the necessary authentication and authorization permission from Stormpath.

Using the Stormpath HiveMQ Plugin

Now it is time to get the Stormpath HiveMQ plugin in place and see how simple it is to authenticate a client against Stormpath.

General Setup

stormpath.apiKey.id: <Your API key goes here>
stormpath.apiKey.secret: <Your API key secret goes here>



Please choose the directory that matches the application name you set in the property file. The username and password must match the credentials provided by the MQTT client (directory, username, firstname, lastname, email and password are mandatory fields).


Hint: At this point the client can’t publish or subscribe to any topic, because the permission defaults to deny.


More than a proof of concept

While configuring permissions via the Stormpath Web UI is easy and sufficient for a proof of concept, it may be tedious for real applications to maintain all permissions by hand. And here is where Stormpath really excels in conjunction with HiveMQ: You can update all permissions and accounts via the REST API and all changes are automatically applied to your HiveMQ instance. You could integrate Stormpath easily with your user-registration backend and automatically add the correct topic permissions to HiveMQ. Imagine you had a HiveMQ cluster up and running – you can automatically update all the permissions without doing anything.


As we have seen, the setup of Stormpath and HiveMQ is done in minutes, and you now have a directory for authentication and authorization in place that can be easily modified via the Web UI and programmatically – while HiveMQ is running!

Anil Saldhana - Red HatJBoss Community Projects (including WildFly AS): OpenSSL HeartBleed Vulnerability [Technorati links]

April 09, 2014 06:29 PM
I want to use this post to summarize that "JBoss community projects, including WildFly Application Server, are not directly affected by the OpenSSL HeartBleed Vulnerability".

JBossWeb APR

JBossWeb APR functionality requires OpenSSL 0.9.7 or 0.9.8, which are not affected by this vulnerability.

I have consulted the Red Hat Security Response Team before posting this note. We continue to monitor the situation.
Feel free to report any anomalies using http://www.jboss.org/security

We do recommend taking the appropriate precautions.

Please use the links in the references section for gauging indirect exposure to the HeartBleed vulnerability.

Indirect exposure may be possible.


Please refer to the following articles for more information:

Official OpenSSL Advisory: https://www.openssl.org/news/secadv_20140407.txt
HeartBleed Information: http://www.heartbleed.com

Red Hat Official Announcement: https://access.redhat.com/site/announcements/781953

CVE: https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2014-0160

Amazon Web Services Advisory: https://aws.amazon.com/amazon-linux-ami/security-bulletins/ALAS-2014-320/

Official Linux Distribution Pages


Christopher Allen - AlacrityAdvice to SysAdmins & Managers about Heartbleed Bug in SSL [Technorati links]

April 09, 2014 06:15 PM

Christopher Allen - AlacrityGeneral Advice about the Heartbleed Bug in SSL [Technorati links]

April 09, 2014 06:00 PM

Phil Hunt - OracleStandards Corner: Basic Auth MUST Die! [Technorati links]

April 09, 2014 03:57 PM
Basic Authentication (part of RFC2617) was developed along with HTTP 1.1 (RFC2616) when the web was relatively new. This specification envisioned that user agents (browsers) would ask users for their user-id and password and then pass the encoded information to the web server via the HTTP Authorization header. The Basic Auth approach quickly died in popularity in favour of form-based login, where

Julian BondThere's really only one choice left for the US Republican Party: [Technorati links]

April 09, 2014 03:05 PM
There's really only one choice left for the US Republican Party:

Vote Putin-Palin in 2016!


www.antipope.org/charlie/pix/Vladimir-Putin-riding-a-bear.jpeg

[from: Google+ Posts]

Kuppinger ColeMigrating away from your current Identity Provisioning solution [Technorati links]

April 09, 2014 11:30 AM
In KuppingerCole Podcasts

Many organizations are currently considering migrating away from their current Identity Provisioning solution. There are many reasons to do so: vendors were acquired and the roadmap changed; the requirements have changed and the current solution no longer appears to be a perfect fit; a lot of money has been spent for little value; the solution does not suit the new requirements of managing external users and access to Cloud services...

Watch online
April 08, 2014

Kuppinger Cole18.06.2014: Moving from Prohibition to Trust: Identity Management in the On Premises and Cloud Era [Technorati links]

April 08, 2014 11:43 PM
In KuppingerCole

Managing and governing access to systems and information, both on-premises and in the cloud, needs to be well architected to embrace and extend existing building blocks and help organizations move towards a more flexible, future-proof IT infrastructure.

Ping Talk - Ping IdentityBulletin: Ping Identity Unaffected by Heartbleed [Technorati links]

April 08, 2014 11:01 PM
(Updated April 15 to include recommendation to update shared credentials)

While the OpenSSL Heartbleed bug continues to feed a patching frenzy across the Internet, those using PingFederate, PingOne and/or PingAccess can rest easy.

None of our platforms is vulnerable to the bug. No updates or patches are required. However, customers that share certificates across applications and platforms, including PingFederate, should exercise due diligence on their non-Ping platforms. Ping recommends that credentials at risk be changed out. The change would include any private keys, passwords, shared secrets, and any other credentials on the application that might be used for authentication to PingFederate, or that have some other shared usage within PingFederate. No updates or patches are needed for the Ping software.

Ping's Security Engineering confirms that PingFederate does not use the affected software. But for the sake of transparency, customers should note that we do distribute and use OpenSSL with our Apache Integration Kit for Windows, but our package does not contain the vulnerable code, we don't use it to run HTTPS, and it's not a method that is exposed.

In addition, our Apache Integration Kit for Linux is dependent on the OS's OpenSSL library, but we do not distribute the library - just use it. But it is key to note that we aren't using the library in a way that is exposed. However, PingFederate may be exposed indirectly to Heartbleed when configurations of PingFederate incorporate certificates created or used by another application or platform that has been compromised, e.g. a shared certificate. Follow our recommendations at https://www.pingidentity.com/support/solutions/index.cfm/Heartbleed-and-Ping-Identity-products

In addition, Beau Christensen, Ping's director of infrastructure operations, confirmed that Ping Identity's cloud services, notably PingOne, are not affected by the Heartbleed vulnerabilities. He said that as a precautionary measure, "we are forcing credential updates across all systems, and are rotating public certificates and keys." His full report is available at https://status.pingidentity.com/incidents/jyxrz26bwph9

Also, the engineering team for PingAccess, our mobile, Web and API access management platform, confirmed it was not affected by the bug.

Brian Whitney, Beau Christensen, Paul Marshall, Stephen Edmonds, Andrew King, Bill Jung, Yang Yu and John Fontana contributed to this blog.

Ping Talk - Ping IdentityThis Week in Identity: That Flushing Sound is Trust Leaving the Building [Technorati links]

April 08, 2014 11:00 PM
The Heartbleed bug landed an MMA-style left hook on the Internet's security jaw this week. Zulfikar Ramza, chief technology officer at Elastica, says Heartbleed cast a shadow over beliefs that the Internet is safe for transactions (http://venturebeat.com/2014/04/09/heartbleed-broken-trust/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+Venturebeat+%28VentureBeat%29). "For people to be able to transact with confidence online, they had to believe that SSL was sacrosanct." Sadly, it was not.

John Biggs: Heartbleed, The First Security Bug With A Cool Logo (http://techcrunch.com/2014/04/09/heartbleed-the-first-consumer-grade-exploit/?ncid=rss&utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+Techcrunch+%28TechCrunch%29)
Heartbleed was one of the first "branded" exploits, a computer bug that has been professionally packaged for easy mass consumption. How did Heartbleed.com happen?

xkcd's stick-figure look at Heartbleed (http://xkcd.com/1353/)
Exploits aren't funny, but in the stick-figure world anything is fair game.

To stem the bleeding, read on...

General

Phil Hunt: Standards Corner: Basic Auth MUST Die! (http://www.independentid.com/2014/04/standards-corner-basic-auth-must-die.html)
Basic Authentication (part of RFC2617) was developed along with HTTP1.1 (RFC2616) when the web was relatively new. This specification envisioned that user-agents (browsers) would ask users for their user-id and password and then pass the encoded information to the web server via the HTTP Authorization header.

Anil John: Context and Identity Resolution (http://blog.aniljohn.com/2014/04/context-and-identity-resolution.html?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+AnilJohn+%28Anil+John+%7C+Blog%29)
If identity is defined as a set of attributes that uniquely describe an individual, identity resolution is the confirmation that an identity has been resolved to a unique individual within a particular context. In a federation environment, identity resolution is a means to an end; namely user enrollment (http://blog.aniljohn.com/2013/08/planning-for-user-enrollment-in-a-federation-redux.html). This blog post looks at identity resolution in two separate contexts, at the identity proofing component and at the RP.

Paul Madsen: Warning! Explicit (Authentication) Content (https://www.pingidentity.com/blogs/cto-blog/index.cfm/2014/04/warning---explicit-content.cfm)
Today's authentication mechanisms are explicit and discontinuous - on some schedule (depending on the resource being accessed) we demand users stop what they are doing (e.g. doing work for us or buying stuff from us) and login - a distinct and unappreciated operation.

APIs

Mark Boyd: Seven Key Messages From Nordic APIs that Got Developers Talking (http://blog.programmableweb.com/2014/04/08/seven-key-messages-from-nordic-apis-that-got-developers-talking/)
Presentations by Travis Spencer (Twobo Technologies) and David Gorton (Ping Identity) shared the latest advances in API neo-security frameworks. Currently, most industry players with an eye to best practice identity management and user authentication are using OAuth 2 and SAML. OpenID Connect is still seen as "the new kid on the block."

Toward 1 million APIs (video) (https://www.youtube.com/watch?v=zhbm_MtSYlg&utm_content=buffere12eb&utm_medium=social&utm_source=twitter.com&utm_campaign=buffer)
API growth is accelerating - with many organizations launching and using APIs. However, we're still in the 10,000s or low 100,000s of APIs range and many are not publicly accessible. What happens when we reach millions of APIs - and indeed, how do we get there? A panel at the API Strategy & Practice Conference in Amsterdam talks about future API challenges. Hosted by Steven Willmot, the CEO at 3scale.

Privacy

Kim Zetter: The Feds Cut a Deal With In-Flight Wi-Fi Providers, and Privacy Groups Are Worried (http://www.scmagazine.com/govwin-iq-hacked-payment-card-data-of-25000-deltek-customers-at-risk/article/342005/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+SCMagazineNews+%28SC+Magazine+News%29)
According to a letter Gogo, the in-flight Wi-Fi provider, submitted to the Federal Communications Commission, the company voluntarily exceeded the requirements of the Communications Assistance for Law Enforcement Act, or CALEA, by adding capabilities to its service at the request of law enforcement. The revelation alarms civil liberties groups, which say companies should not be cutting deals with the government that may enhance the ability to monitor or track users.

John Leyden: Vint Cerf wanted to make internet secure from the start, but secrecy prevented it (http://www.theregister.co.uk/2014/04/07/internet_inception_security_vint_cerf_google_hangout/)
"I worked with the National Security Agency on the design of a secured version of the internet but we used classified security technology at the time and I couldn't share that with my colleagues. If I could start over again I would have introduced a lot more strong authentication and cryptography into the system."

German NSA Panel's Chairman Quits in Spat Over Snowden (http://www.securityweek.com/german-nsa-panels-chairman-quits-spat-over-snowden)
The chairman of a new German parliamentary panel probing mass surveillance by the NSA abruptly quit on Wednesday, rejecting opposition demands that the body question fugitive US intelligence leaker Edward Snowden.

IoT

Kyle VanHemert: This Brilliant Washing Machine Is a Roadmap for the Internet of Things (http://www.wired.com/2014/04/this-brilliant-internet-connected-washer-is-a-roadmap-for-the-internet-of-things/)
There couldn't be a more perfect example of our absurd obsession with the internet of things than the connected washing machine. Nothing so concisely symbolizes just how ludicrous our mania for connectivity has become as a smartphone app that helps you wash your socks.

Events

Info Sec UK (http://www.infosec.co.uk/)
April 29-May 1; London
More than 13,000 attendees at Europe's largest free-to-attend conference. Identity management, mobile, managed services and more.

IIW (http://www.internetidentityworkshop.com/)
May 6-8; Mountain View, Calif.
The Internet Identity Workshop, better known as IIW, is an un-conference that happens at the Computer History Museum in the heart of Silicon Valley.

Glue Conference 2014 (http://www.gluecon.com/2014/)
May 21-22; Broomfield, Colo.
Cloud, DevOps, Mobile, APIs, Big Data -- all of the converging, important trends in technology today share one thing in common: developers.

European Identity & Cloud Conference 2014 (http://www.kuppingercole.com/book/eic2014)
May 13-16; Munich, Germany
The place where identity management, cloud and information security thought leaders and experts get together to discuss and shape the future of secure, privacy-aware, agile, business- and innovation-driven IT.

Gartner Catalyst - UK (http://www.gartner.com/technology/summits/na/catalyst/)
June 17-18; London
A focus on mobile, cloud, and big data with separate tracks on identity-specific IT content as it relates to the three core conference themes.

Cloud Identity Summit 2014 (http://bit.ly/1eyMQQ3)
July 19-22; Monterey, Calif.
The modern identity revolution is upon us. CIS converges the brightest minds across the identity and security industry on redefining identity management in an era of cloud, virtualization and mobile devices.

Gartner Catalyst - USA (http://www.gartner.com/technology/summits/na/catalyst/)
Aug. 11-14; San Diego, CA
A focus on mobile, cloud, and big data with separate tracks on identity-specific IT content as it relates to the three core conference themes.

Kuppinger ColeThe Heartbleed Bug in OpenSSL – probably the most serious security flaw in years [Technorati links]

April 08, 2014 03:27 PM
In Alexei Balaganski

As just about every security-related publication has reported today, a critical vulnerability in OpenSSL was discovered yesterday. OpenSSL is a cryptographic software library which provides SSL/TLS encryption functionality for network traffic all over the Internet. It’s used by the Apache and nginx web servers that together serve well over half of the world’s web sites, and it powers virtual private networks, instant messaging networks and even email. It’s also widely used in client software, devices and appliances.

Because of a bug in the implementation of the TLS Heartbeat extension, remote attackers are potentially able to trigger a memory leak on an affected server and obtain different kinds of sensitive information, including the server’s private keys. The most embarrassing part is that the bug has been present in OpenSSL releases dating back to 2012. Specifically, all OpenSSL versions from 1.0.1 to 1.0.1f are vulnerable. The bug has been fixed in version 1.0.1g, released yesterday.

Needless to say, potential consequences of this vulnerability are huge. Any remote attacker can theoretically leak a server’s private key and then easily decrypt any past or future SSL-encrypted traffic from that server without leaving any traces of an attack. This means that simply patching the vulnerability is not enough; all services involved in handling sensitive information also have to change their private keys and reissue their SSL certificates.

For more information, I recommend checking out heartbleed.com, and to see whether your server is vulnerable, use this test. You can also check the OpenSSL version installed on your server directly.
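Running `openssl version` on the server prints the installed release; a small sketch of the comparison against the affected range (1.0.1 through 1.0.1f, fixed in 1.0.1g) might look like this:

```python
import re

def is_heartbleed_vulnerable(version_string):
    """Parse the output of `openssl version` and flag the affected range.
    Only the 1.0.1 branch before 1.0.1g is affected by Heartbleed."""
    m = re.match(r"OpenSSL (\d+)\.(\d+)\.(\d+)([a-z]*)", version_string)
    if not m:
        raise ValueError(f"unrecognized version string: {version_string!r}")
    branch = tuple(int(x) for x in m.group(1, 2, 3))
    letter = m.group(4)  # '' for the initial 1.0.1 release
    return branch == (1, 0, 1) and letter < "g"
```

On a server you could feed this function the output of `openssl version` directly, e.g. captured via subprocess.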

If your server is vulnerable, the first step should be updating OpenSSL to 1.0.1g – all major Linux and *BSD distributions have already made updates available. If that’s not possible, you can recompile OpenSSL from source with heartbeat support disabled.

Unfortunately, it’s not possible to detect whether your server has already been attacked using this bug. So, to be on the safe side, you should consider reissuing your SSL certificates with new private keys.

One rather sad consequence of the whole Heartbleed debacle is that it delivered a serious blow to a major claim of open source proponents: that open source software is inherently more secure because more people can inspect its source code and find possible vulnerabilities. While this is potentially true, in reality not many people would do that for a project as large as OpenSSL just out of curiosity. I can only hope that someone will finally have a good reason to sponsor a proper security audit of OpenSSL and other open source security software.

This is also a good opportunity for service providers to upgrade their SSL key length to ensure more reliable encryption.

Gluu2FA for every site on the Internet? [Technorati links]

April 08, 2014 02:57 PM

You’ve probably seen http://twofactorauth.org:

This site totally misses the point. I think Walmart should be congratulated for not rolling out 2FA. A tightly bundled solution that just solves two factor authentication for their website (which I almost never visit) or in their stores (which I am almost never in), is fantastic. Nice work Walmart!!!

The list I’d like to see is which websites enable me to specify where I want to be authenticated, and hopefully with what mechanism. I can choose a domain for my website and email. Why shouldn’t I be allowed to choose how and where I authenticate?

For many people this domain would be Google.com or Facebook.com. We already have social creds, so in many cases these are a good choice. In other cases, I might want to use my work email to identify my home domain. For example, if I am using a SaaS business application, my work might even be paying for it, so it makes sense that they’d want to control access.

The problem is that in the past, it wasn’t clear what standard websites should adopt to enable distributed authentication. Finally, the answer is clear: OpenID Connect. This standard has the backing of Microsoft, Google, enterprise security vendors, and already has tons of open source implementations and libraries like the OX OpenID Connect Provider.

If the authors of http://twofactorauth.org had actually done their research, they would have discovered that the main reason websites don’t use two-factor authentication is deployment issues. A large enterprise like Walmart needs to identify people who are acting as its employees, customers, and partners. The IT infrastructure comprises numerous web services, both internal and third party. Tightly bundling one type of authentication to one application does not really address the security concern.

Ironically, increasing security is an inconvenience to the customer. The best usability is not authenticating me at all. We should congratulate the websites that use authentication intelligently to mitigate network security risks. We should not congratulate knee-jerk adoption of technology that doesn’t enhance usability or security for their site, or for the Internet in general.

Kuppinger ColeLeadership Compass: Enterprise Key and Certificate Management - 70961 [Technorati links]

April 08, 2014 12:17 PM
In KuppingerCole

Enterprise Key and Certificate Management (EKCM) is made up of two niche markets that are converging. This process still continues and, as with all major changes in IT market segments, is driven by customer requirements, which in turn are driven by security and compliance needs. Until recently, compliance has been the bigger driver, but increasingly, in the days of Cloud and mobile technology, it is security of data in storage (data in the hands of others, that is), security...

Kuppinger ColeHow NOT to protect your email from snooping [Technorati links]

April 08, 2014 09:22 AM
In Alexei Balaganski

Since the documents leaked last year by Edward Snowden have revealed the true extent of NSA powers to dig into people’s personal data around the world, the topic of protecting internet communications has become of utmost importance for government organizations, businesses and private persons alike. This is especially important for email, one of the most widely used Internet communication services.

One of the oldest Internet services still in use (the SMTP protocol was published in 1982), email is based on a set of inherently insecure protocols and by design cannot provide reliable protection against many types of attacks. Hacking, eavesdropping, forged identities, spam, phishing – you name it. Yes, there have been numerous developments to improve the situation over the years: transport layer security, anti-spam and anti-malware solutions, even text encryption. However, all of them are considered optional add-ons to the core set of services, since maintaining backwards compatibility with decades-old systems prevents us from enforcing new security-aware standards and protocols.

For the same reason we cannot just abandon email and switch to new, more secure communication services: most of our correspondents still use email only. Companies providing secured email services have existed for over a decade, but their adoption rates have always been low. Security experts have been fighting this inertia for years, educating the public, developing new protocols and services and pushing for stronger regulations. Alas, people are lazy; they always tend to choose convenience over security.

At least, it was like that until last year. Thanks to Snowden, people suddenly realized that their confidential communications are not just theoretically vulnerable to hacking or other illegal activities. In fact, nearly all their communications are routinely siphoned to huge government datacenters, where they are stored, analyzed and matched to other sources of private information. Even worse, all this is completely legal under current laws, and Internet communications providers are forced to silently cooperate with intelligence services – no hacking required.

Finally, people started to take notice. Finally, not just corporate IT managers, but informed consumers have come to understand that the only reliable protection against all kinds of eavesdropping is end-to-end encryption. Unfortunately, it seems that not everyone understands what exactly “end-to-end encryption” is.

The reason that motivated me to write this post was an article titled “Google encrypts all Gmail communications to protect users from NSA snooping”. Several German email providers like GMX and web.de used the same rhetoric when they announced similar functionality. Even De-Mail, a paid service from the German government, does not offer mandatory encryption.

Of course, this statement could not be further from reality. Yes, forcing all users to use an encrypted SSL connection to a webmail service is good news. In fact, I would even recommend using a tool like HTTPS Everywhere to enable SSL automatically on many major websites, because it makes browsing more secure and provides protection against man-in-the-middle attacks that can steal your passwords.

However, when it comes to email, SSL only protects the “first mile” of your message’s journey to its destination. As soon as it reaches your provider’s mail server, it is stored on disk in a completely unencrypted form, open for snooping by server administrators, secret services or hackers. When the message is relayed to the next mail server, chances are that the transport channel won’t be encrypted either, simply because the other server does not support it. On its way, your mail will be read and analyzed by multiple servers and other devices (anti-spam services, anti-malware appliances, firewalls with deep packet inspection and so on). Any of these devices can store a copy for later use or simply collect metadata in the form of logs.

For companies like Google, being able to snoop through your emails is in fact fundamental to their business model: they need to serve you the most relevant ads to increase their revenue. They can do it legitimately, because it’s part of their TOS. They will even share collected information with third parties. No kind of transport encryption will change that.

Companies building their business model on trust and aiming to provide a truly secure service, both in technical and legal terms, face a different kind of problem. They can simply be forced to hand all master keys over to the government, rendering all encryption useless. Thanks to Ladar Levison of Lavabit, we now know that, too.

Therefore, in my opinion, the only reasonable method of securing email currently available is to use a desktop mail program with a form of public-key encryption, encrypting all outgoing mail directly on your computer and decrypting it directly on the recipient’s computer. Unfortunately, the protocols currently in use (the most common are S/MIME and OpenPGP) are mutually incompatible, and most mail programs require third-party add-ons to implement them. In addition, before you can encrypt your messages, you need to exchange encryption keys with the other party over a secure channel (not email!). And, of course, you should always keep in mind that the mere fact that you are using encryption may attract the attention of secret services: an honest man has nothing to hide, doesn’t he? Unfortunately, the way email works, it cannot provide any kind of plausible deniability, since message metadata is never encrypted. That’s probably one of the reasons for the recent surge in popularity of ephemeral messaging services like Threema or Telegram, which at least claim not to keep any traces of your messages on their servers. Whether you should trust these claims is, of course, another difficult question…

By the way, the future of encryption and privacy-enabling technologies will be a big topic during our upcoming European Identity & Cloud Conference. Leading experts will join Ladar Levison himself to discuss technical, political and legal challenges. You should be there as well!

Kuppinger ColeIBM’s Software Defined Environment [Technorati links]

April 08, 2014 09:18 AM
In Mike Small

In IBM’s view, the kinds of IT applications that organizations are creating are changing from internal-facing systems to external-facing systems. IBM calls these kinds of systems “systems of record” and “systems of engagement” respectively. The systems of record represent the traditional applications that ensure that the internal aspects of the business run smoothly and the organization is financially well governed. The systems of engagement exploit the new wave of technology that is being used by customers and partners and which takes the form of social and mobile computing. In IBM’s opinion, a new approach to IT is needed to cater for this change, which IBM calls SDE (Software Defined Environments).

According to IBM, these systems of engagement are being developed to enable organizations to get closer to their customers and partners, to better understand their needs and to better respond to their issues and concerns. They are therefore vital to the future of the business.

However, the way these systems of engagement are developed, deployed and exploited is radically different from that for systems of record. The development methodology is incremental and highly responsive to user feedback. Deployment requires IT infrastructure that can quickly and flexibly respond to use by people outside the organization. Exploitation of these applications requires the use of emerging technologies like Big Data analytics, which can place unpredictable demands on the IT infrastructure.

In response to these demands IBM has a number of approaches; for example, in February I wrote about how IBM has been investing billions of dollars in the cloud. IBM also offers something it calls SDE (Software Defined Environment). IBM’s SDE custom-builds business services by leveraging the infrastructure according to workload types, business rules and resource availability. Once these business rules are in place, resources are orchestrated by patterns: best practices that govern how to build, deploy, scale and optimize the services that these workloads deliver.

IBM is not alone in this approach; others, notably VMware, are heading in the same direction.

In the IBM approach, abstracted and virtualized IT infrastructure resources are managed by software via API invocations. Applications automatically define infrastructure requirements, configuration and service level expectations. The developer, the people deploying the service and the IT service provider are all taken into account by the SDE.

This is achieved by building the IBM SDE on software and standards from the OpenStack Foundation, of which IBM is a member. IBM has added specific components and functionality to OpenStack to fully exploit IBM hardware and software, including drivers for IBM storage devices, PowerVM, KVM and IBM network devices. IBM has also included some IBM “added value” functionality, which includes management API additions, scheduler enhancements, management console GUI additions and a simplified install. Since the IBM SmartCloud offerings are also based on OpenStack, this makes cloud bursting into the IBM SmartCloud (as well as any other cloud based on OpenStack) easier, except where there is a dependency on the added-value functionality.

One of the interesting areas is the support provided by the Platform Resource Scheduler for the placement of workloads. The supported policies make it possible to place workloads in a wide variety of ways, including packing workloads onto the fewest physical servers or spreading them across several, load balancing and memory balancing, and keeping workloads physically close or physically separate.

IBM sees organizations moving to SDEs incrementally rather than in a big-bang approach. The stages it foresees are virtualization, elastic data scaling, elastic transaction scaling, policy-based optimization and, finally, application-aware infrastructure.

In KuppingerCole’s opinion, SDCI (Software Defined Computing Infrastructure) is the next big thing. Martin Kuppinger wrote about this at the end of 2013. IBM’s SDE fits into this model and has the potential to allow end-user organizations to make better use of their existing IT infrastructure and to provide greater flexibility to meet changing business needs. It is good that IBM’s SDE is based on standards; however, there is still a risk of lock-in, since the standards in this area are incomplete and still emerging. My colleague Rob Newby has also written about the changes that are needed for organizations to successfully adopt SDCI. In addition, it will require a significant measure of technical expertise to implement in full.

For more information on this subject there are sessions on Software Defined Infrastructure and a Workshop on Negotiating Cloud Standards Jungle at EIC May 12th to 16th in Munich.

Ben Laurie - Apache / The BunkerFruity Lamb Curry [Technorati links]

April 08, 2014 09:14 AM

My younger son, Oscar, asked me to put bananas into the lamb curry I was planning to cook. Which inspired this:

Chopped onions
Diced ginger
Star anise and other ground spices
Diced leg of lamb
Dried apricot
Ghee
Raisins
Banana
Lemon
Greek yoghurt

Fry the onions in the ghee. Add ginger and ground spices and fry for a minute more, then add the diced lamb and brown. Add the raisins, banana (sliced), dried apricot (roughly chopped) and lemon (cut into eighths, including skin) and some yoghurt. Cook on a medium heat until the yoghurt begins to dry out, then add some more. Repeat a couple of times (I used most of a 500ml tub of greek yoghurt). Salt to taste. Eat. The lemon is surprisingly edible.

I served it with saffron rice and dal with aubergines.